I started with VB6 so I'm sometimes nostalgic for it too but let's not kid ourselves.
We might take it for granted, but the React-like declarative top-down component model (as opposed to imperative UI) was a huge step forward. In particular, that there's no difference between an initial render and a re-render, and that updating state is enough for everything to propagate down. That's why it went beyond the web, and why all modern native UI frameworks have a similar model these days.
> and why all modern native UI frameworks have a similar model these days.
Personally I much prefer the approach taken by solidjs / svelte.
React’s approach is very inefficient - the entire view tree is rerendered when any change happens. Then they need to diff the new UI state with the old state and do reconciliation. This works well enough for tiny examples, but it’s clunky at scale. And the code to do diffing and reconciliation is insanely complicated. Hello world in react is like 200kb of javascript or something like that. (Smaller gzipped, but the browser still needs to parse it all at startup). And all of that diffing is also pure overhead. It’s simply not needed.
The solidjs / svelte model uses the compiler to figure out how variables changing results in changes to the rendered view tree. Those variables are wrapped up as “observed state”. As a result, you can just update those variables and exactly and only the parts of the UI that need to be changed will be redrawn. No overrendering. No diffing. No virtual DOM and no reconciliation. Hello world in solid or svelte is minuscule - 2kb or something.
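For anyone unfamiliar, the core mechanism can be sketched in a few lines. This is an illustrative toy, not Solid's actual implementation; the names `createSignal` / `createEffect` echo Solid's API, but everything here is a simplification:

```typescript
// Toy fine-grained reactivity: reads register dependencies, writes
// re-run only the effects that actually read the changed signal.
type Effect = () => void;

let currentEffect: Effect | null = null;

function createSignal<T>(value: T): [() => T, (v: T) => void] {
  const subscribers = new Set<Effect>();
  const read = () => {
    // Track whichever effect is currently running as a dependency.
    if (currentEffect) subscribers.add(currentEffect);
    return value;
  };
  const write = (v: T) => {
    value = v;
    // Re-run only the effects subscribed to this signal.
    for (const fn of subscribers) fn();
  };
  return [read, write];
}

function createEffect(fn: Effect): void {
  currentEffect = fn;
  fn(); // the first run registers this effect's dependencies
  currentEffect = null;
}

// Usage: only the effect that reads `count` re-runs on each update.
// No tree-wide re-render, no diffing.
const log: number[] = [];
const [count, setCount] = createSignal(0);
createEffect(() => log.push(count()));
setCount(1);
setCount(2);
// log is now [0, 1, 2]
```

The point of the sketch is that the dependency graph is built at runtime from actual reads, so an update touches exactly the subscribed effects and nothing else.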
Unfortunately, SwiftUI has copied React, and not the superior approach of newer libraries.
The Rust “Leptos” library implements this same fine-grained reactivity, but it’s still married to the web. I’m really hoping someone takes the same idea and ports it to desktop / native UI.
>React’s approach is very inefficient - the entire view tree is rerendered when any change happens.
That's not true. React only re-renders down from where the update happens. And it skips over stuff that is provably unchanged -- which, fair, involves manual memoization hints. Although with React Compiler it's actually pretty good at automatically adding those, so in practice it mostly re-renders along the actually changed path.
>And the code to do diffing and reconciliation is insanely complicated.
It's really not; the "diffing" is relatively simple and is maybe ~2kloc of repetitive functions (one per component kind) in the React source code. Most of the complexity of React is elsewhere.
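To give a sense of what "diffing" amounts to: compare the old and new element descriptions and decide keep / replace / update. A toy sketch (illustrative only; the real reconciler also handles keys, fibers, child removals, effect scheduling, etc.):

```typescript
// A minimal virtual-node shape and the three patch outcomes.
type VNode = { type: string; props: Record<string, unknown>; children: VNode[] };

type Patch =
  | { kind: "replace"; next: VNode }
  | { kind: "update"; props: Record<string, unknown>; children: (Patch | null)[] };

// Returns null when nothing changed, otherwise a patch describing the change.
function diff(prev: VNode, next: VNode): Patch | null {
  // Different element type: give up and replace the whole subtree.
  if (prev.type !== next.type) return { kind: "replace", next };

  // Collect props whose values differ.
  const changed: Record<string, unknown> = {};
  let dirty = false;
  for (const key of new Set([...Object.keys(prev.props), ...Object.keys(next.props)])) {
    if (prev.props[key] !== next.props[key]) {
      changed[key] = next.props[key];
      dirty = true;
    }
  }

  // Recurse into children positionally (no keyed matching in this toy).
  const childPatches = next.children.map((child, i) =>
    prev.children[i] ? diff(prev.children[i], child) : { kind: "replace" as const, next: child }
  );

  if (!dirty && childPatches.every((p) => p === null)) return null;
  return { kind: "update", props: changed, children: childPatches };
}
```

It really is mostly repetitive structural comparison like this; the hard parts of a production reconciler are scheduling and state preservation around it, not the comparison itself.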
>The solidjs / react model uses the compiler to figure out how variables changing results in changes to the rendered view tree.
I actually count those as "React-like" because it's still declarative componentized top-down model unlike say VB6.
> That's not true. React only re-renders down from where the update happens. And it skips over stuff that is provably unchanged -- which, fair, involves manual memoization hints.
React only skips over stuff that's provably unchanged. But in many (most?) web apps, it rerenders a lot. Yeah, you can add memoization hints. But how many people actually do that? I've worked on several react projects, and I don't think I've ever seen anyone manually add memoization hints.
To be honest it seems a bit like Electron. People who really know what they're doing can get decent performance. But the average person working with react doesn't understand how react works very well at all. And the average react website ends up feeling slow.
> Most of complexity of React is elsewhere.
Where is the rest of the complexity of react? The uncompressed JS bundle is huge. What does all that code even do?
> I actually count [solidjs / svelte] as "React-like" because it's still declarative componentized top-down model unlike say VB6.
Yeah, in the sense that SolidJS and Svelte iterate on React's approach to application development. They're kinda React 2.0. It's fair to say they borrow a lot of ideas from React, and they wouldn't exist without React. But there are also a lot of differences. SolidJS and Svelte match React's developer ergonomics while having better performance and a web app download size that is many times smaller. Automatic fine-grained reactivity means no virtual DOM, no vdom diffing, and no manual memoization or anything like that.
They also have a trick that React is missing: your component can just have variables again. SolidJS looks like React, but your component is only executed once per instance on the page. Updates don't throw anything away. As a result, you don't need special React state / hooks / context / redux / whatever. You can mostly just use actual variables. It's lovely. (Though you will need a SolidJS store if you want your page to react to variables being updated.)
>React only skips over stuff that's provably unchanged. But in many - most? web apps, it rerenders a lot. Yeah, you can add memoization hints. But how many people actually do that?
Even without any hints, it doesn't re-render "the entire view tree" like your parent comment claims, but only stuff below the place that's updated. E.g. if you're updating a text box, only stuff under the component owning that text box's state is considered for reconciliation.
Re: manual memoization hints, I'm not sure what you mean — `useMemo` and `useCallback` are used all over the place in React projects, often unnecessarily. It's definitely something that people do a lot. But also, React Compiler does this automatically, so assuming it gets wider adoption, in the longer run manual hints aren't necessary anyway.
>Where is the rest of the complexity of react?
It's kind of spread around, I wouldn't say it's one specific piece. There's some complexity in hydration (for reviving HTML), declarative loading states (Suspense), interruptible updates (Transitions), error recovery (Error Boundaries), soon animations (View Transitions), and having all these features work with each other cohesively.
I used to work on React, so I'm familiar with what those other libraries do. I understand the things you enjoy about Solid. My bigger point is just that it's still a very different programming model from VB6 and such.
Thanks for your work on React. I just realised who I’m talking to. *sweats* I agree that the functional reactive model is a very different programming model than VB6. We all owe a lot to React, even though I personally don’t use the React library itself any more. But it does seem a pity to me how many sloppy, bloated websites out there are built on top of React. And how SwiftUI and others seem to be trying to copy React rather than its newer, younger siblings, which had a chance to learn from some of React’s choices and iterate on them.
UI libraries aside, I’d really love to see the same reactive programming pattern applied to a compiler. Done well, I’m convinced we should be able to implement sub-millisecond patching of a binary as I change my code.
I’m so tired of reading LLM slop articles. I don’t mind someone using AI assistance but it should be embarrassing to put your name next to something you so obviously didn’t write.
I don’t remember who said it but I really like this summary: posting LLM slop as your own writing destroys the reader/writer contract. Normally you’d expect the writer to have spent more effort on a piece than the reader. But now the reader is the one who’s spending more effort, trying to interpret a chain of words from nobody’s mind.
I am a former D-list tech blogger, and the thought of posting slop under my name horrifies me. But then again, I consider myself an author who has enjoyed the pleasant side-effect of minor notability. I never considered myself an influencer who happened to use writing to acquire more influence.
Anybody shipping slop around—whether written by interns and published under their name or written by machines—is not an author. They are an influencer, and reposting slop is what they do.
The article is certainly shallow, and its title is clickbait, and it says things that will make some web developers roll their eyes, and of course LLMs are now available to anyone — but what makes you think this particular article was written by an LLM? What are the telltale signs?
It’s more of a vibe, as they say :) Things that cumulatively feel off: overly descriptive headers, overuse of flowery language (“we’re entering a new age where A is B, where X coexists peacefully with Y”). Lots of “isn’t X but Y”, “not X, just Y”. In general the rhythm and the tone are a tell (authoritative and scoldy, but vapid and bland at the same time).
This is a really good explanation, but it reinforces my understanding that these “junk maths” results are literally undefined behavior, as in C and such. They are not defined (in maths), you are not supposed to trigger them, so they can be anything. Great…
This is horrible for a language whose whole purpose, I thought, was to be foolproof: if it compiles, it’s true. Having very subtly different definitions of common operations is such a footgun.
Of course, I understand that this doesn’t bother mathematicians, because they are used to not having any guardrails anyway. Just like C programmers have the attitude that if you fall into such a trap, you deserve it and you are not a “real programmer”. But Lean is supposed to be the other extreme, isn’t it? Take nothing for granted and verify it from the ground up.
I suppose I am falling for that “Twitter confusion” the post is referring to. I never had any issues with this when actually using Lean. I just don’t like the burden of having to be paranoid about it, I thought Lean had my back and I could use it fairly mechanically by transforming abstract structures without thinking about the underlying semantics too much.
Anyway, despite the annoyance, I do assume that the designers know better and that it is a pragmatic and necessary compromise if it’s such a common pattern. But there must be a better solution: if having the exception makes it uncomfortable to prove, then design the language so that it is comfortable to prove such a thing. Don’t just remove the exception because 99% of the time it doesn’t matter. If we were happy with 99%, we wouldn’t be reaching for formal verification; there are much more practical means to check correctness.
There is still a guardrail. The blog post explains that it is just using different functions and notation which might allow things like 0/0. But at the end of the day, different notation still cannot be used to prove false things.
In other words, you can use all these junk theorems to build strange results on the side, but you can never build something that disagrees with normal math or that contradicts itself. There is no footgun, because the weird results you obtain are just notation. They look weird to a human, but they don't allow you to actually break any rules or to prove 1=0.
I understand that, but if "/" and other common operators don't mean what they mean on paper, you can prove things that would be untrue if copied onto paper (kinda). You can indeed prove "1/0 = 0", which is not that far off from redefining "=" and proving "1=0".
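For context on why "1/0 = 0" is provable at all: Lean makes division total, and natural-number division by zero is simply defined to return 0. A minimal illustration in Lean 4 (core library only, no Mathlib):

```lean
-- Natural-number division in Lean 4 is total: `n / 0` is defined to be 0.
example : (1 : Nat) / 0 = 0 := Nat.div_zero 1

-- But the junk value does not let you derive a contradiction;
-- you still cannot prove something like `1 = 0`.
example : (1 : Nat) ≠ 0 := by decide
```

So the first statement is a fact about Lean's total `/`, not about the paper notion of division; soundness is untouched.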
More importantly, going the other way, it seems too easy to copy a proposition from paper into Lean and falsely prove it without realising the two don't express the same thing. A human probably wouldn't, but there's increased usage of AI and other automatic methods with Lean.
I do understand I'm being purist and that it doesn't matter that much in practice. I've used Lean seriously for a while and I've never encountered any of this.
Thank you! This hit the nail on the head for me, though I probably need to try out a few more examples to fully convince myself.
TL;DR: It's actually harmless (and often convenient) to "inflate" the domains of partial functions to make them total (by making them return arbitrary junk values where the original function is undefined), provided that every theorem you want to apply still comes with the original, full restrictions.
Kevin's example is good. My stupider example would be: We can define a set that contains the integers ..., -2, -1, 0, 1, 2, ..., plus the extra element "banana". If we define the result of any addition, subtraction or multiplication involving a banana to be 42, and to have their usual results otherwise, then, provided that we add the condition "None of the variables involved is banana" to the theorem "x+y = y+x", and to every other theorem about arithmetic, anything that we can prove about arithmetic on elements of this set is also true of arithmetic on integers.
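The banana example can even be written down directly. A sketch in Lean 4 (the names `BInt` and `badd` are made up for illustration; 42 is the arbitrary junk value from the comment above):

```lean
-- "Integers plus banana": a hypothetical inflated domain.
inductive BInt where
  | int (n : Int)
  | banana
open BInt

-- Any addition touching a banana returns the junk value 42.
def badd : BInt → BInt → BInt
  | int a, int b => int (a + b)
  | _,     _     => int 42

-- Commutativity carries the "no banana" side conditions; under those
-- conditions it is exactly integer commutativity.
theorem badd_comm (x y : BInt) (hx : x ≠ banana) (hy : y ≠ banana) :
    badd x y = badd y x := by
  cases x with
  | banana => exact absurd rfl hx
  | int a =>
    cases y with
    | banana => exact absurd rfl hy
    | int b =>
      show int (a + b) = int (b + a)
      rw [Int.add_comm]
```

Anything proved under the no-banana hypotheses transfers verbatim to ordinary integer arithmetic, which is the whole point of inflating the domain.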
The setting is mostly cosmetic and only affects the Bluesky official app and web interface. People do find this setting helpful for curbing external waves of harassment (less motivated people just won't bother making an account), but the data is public and is available on the AT protocol: https://pdsls.dev/at://robpike.io/app.bsky.feed.post/3matwg6...
So nothing is stopping LLMs from training on that data per se.
That's assuming that AI companies are gathering data in a smart way. The entire MusicBrainz database can be downloaded for free, but AI scrapers are still attempting to scrape it one HTML page at a time, which often leads to the service having errors and/or slowdowns.
Yeah, that’s true. I’m just saying that if someone wants to put in a modicum of effort, the AT ecosystem is highly scrapable by design. In fact the apps themselves (like Bluesky) are essentially scrapers.
>This is done for the same reason Threads blocks all access without a login, and mostly Twitter too. It's to force account creation, collection of user data and support increased monetization.
I worked at Bluesky when the decision to add this setting was made, and your assessment of why it was added is wrong.
The historical reason it was added is because early on the site had no public web interface at all. And by the time it was being added, there was a lot of concern from the users who misunderstood the nature of the app (despite warnings when signing up that all data is public) and who were worried that suddenly having a low-friction way to view their accounts would invite a wave of harassment. The team was very torn on this but decided to add the user-controlled ability to add this barrier, off by default.
Obviously, on a public network, this is still not a real gate (as I showed earlier, you can still see content through any alternative apps). This is why the setting is called "Discourage apps from showing my account to logged-out users" and it has a disclaimer:
>Bluesky is an open and public network. This setting only limits the visibility of your content on the Bluesky app and website, and other apps may not respect this setting. Your content may still be shown to logged-out users by other apps and websites.
Still, in practice, many users found this setting helpful to limit waves of harassment if a post of theirs escaped containment, and the setting was kept.
The Bluesky app respects Rob's setting (which is off by default) to not show his posts to logged out users, but fundamentally the protocol is for public data, so you can access it.
I mean "protocol" in the sense of "wire protocol" or "serialization format". Is that clearer?
>And quite a "weird" decision, as some of the people implementing this detail also work for Vercel, the one benefiting the most from the undocumented APIs.
There is no dependency on the wire protocol in any of Vercel's code. (It wouldn't work since it breaks between versions and would be very fragile to do. That's the whole point of doing it as an implementation detail of React.)
The protocol (de)serializer is in the React repo and is 100% open source. It was designed by the person who led the React team at Meta, and it was created before Vercel contributed any code to React.
"Test the tests" is a big ask for many complex software projects.
Most human-driven coding + testing takes heavy advantage of white-box testing.
For open-ended complex-systems development, turning everything into black-box testing is hard. The LLMs, as noted in the post, are good at trying a lot of shit and inadvertently discovering stuff that passes incomplete tests without fully working. Or, if you're in straight-up yolo mode, fucking up your test because it misunderstood the assignment, my personal favorite.
We already know it's very hard to have exhaustive coverage for unexpected input edge cases, for instance. The stuff of a million security bugs.
So as the combinatorial surface of "all possible actions that can be taken in the system in all possible orders" increases because you build more stuff into your system, so does the difficulty of relying on LLMs looping over prompts until tests go green.