Am I the only one that thinks the vanilla js example is actually easier to read and work with?
- "The setup is noise and boilerplate heavy." Actually the signals example looks just as noisy and boilerplate heavy to me. And it introduces new boilerplate concepts which are hard for beginners to understand.
- "If the counter changes but parity does not (e.g. counter goes from 2 to 4), then we do unnecessary computation of the parity and unnecessary rendering." - Sounds like they want premature memoization.
- "What if another part of our UI just wants to render when the counter updates?" Then I agree the strawman example is probably not what you want. At that point you might want to handle the state using signals, event handling, central state store (e.g. redux-like tools), or some other method. I think this is also what they meant by "The counter state is tightly coupled to the rendering system."? Some of this document feels a little repetitive.
- "What if another part of our UI is dependent on isEven or parity alone?" Sure, you could change your entire approach because of this if that's a really central part of your app, but most often it's not. And "The render function, which is only dependent on parity must instead "know" that it actually needs to subscribe to counter." is often not an unreasonable obligation. I mean, that's one of the nice things about pure computed functions- it's easy to spot their inputs.
Why do you think this is premature memoization? This is an example, boiled down to a simple function. Do you think people just came up with the use case for this without ever having needed it?
I think an effort in standardizing signals, a concept that is increasingly used in UI development is a laudable effort. I don't want to get into the nitty gritty about what is too much boilerplate and whether you should build an event system or not, but since signals are something that is used in a variety of frameworks, there might be a good reason to it? And why not make an effort and standardize them over time?
While they share the same name, and are both reactive primitives, there are some fairly key differences between these signals and the Qt signals and slots mechanism.
The main one is that Qt signals are, as far as I understand, a fairly static construct - as you construct the various components of the application, you also construct the reactive graph. This graph might be updated over time, but usually only when components are mounted and unmounted. JS signals, however, are built fresh every time they are executed, which makes them much more dynamic.
In addition, dependencies in JS signals are automatic rather than needing to be explicitly defined. There's no need to call a function like connect, addEventListener, or subscribe; you just call the original signal within the context of a computation, and the computation will subscribe to that signal.
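A minimal, hand-rolled sketch of how that automatic subscription can work (illustrative only, not the proposal's actual implementation):

```javascript
// The currently-running computation; reads that happen while it is set
// register it as a subscriber.
let activeComputation = null;

function state(value) {
  const subscribers = new Set();
  return {
    get() {
      if (activeComputation) subscribers.add(activeComputation); // auto-subscribe
      return value;
    },
    set(next) {
      value = next;
      subscribers.forEach((run) => run()); // notify dependents
    },
  };
}

function effect(fn) {
  const run = () => {
    activeComputation = run;
    try { fn(); } finally { activeComputation = null; }
  };
  run();
}

const counter = state(0);
const seen = [];
effect(() => seen.push(counter.get())); // subscribes just by calling get()
counter.set(1); // reruns the effect automatically
// seen is now [0, 1]
```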
Thirdly, in JS signals, you don't necessarily need to have a signal object to be able to subscribe to that signal. You can build an abstraction that doesn't necessarily expose the signal value itself, and instead provides getter functions that may call the underlying signal getter. And this same abstraction can be used both inside and outside of other reactive computations.
So on the one hand, yes, JS signals are just another reactivity tool and therefore will share features with many existing tools like signals and slots, observables, event emitters, and so on. But within that space, there is also a meaningful difference in how that reactivity occurs and is used.
This is an interesting topic so I tried to dive in a bit.
From my reading I understood that Qt signals & slots (and Qt events) are much more closely related to JavaScript events (native and custom).
In both, you can explicitly emit, handle, and listen to events/signals. JavaScript events seem to combine both Qt signals & slots and Qt events, though of course without the type safety.
"Signals are emitted by objects when they change their state in a way that may be interesting to other objects."
However what I think they are proposing in the article is a much more complex abstraction: they want to automate it so that whenever any part of a complex graph of states changes, every piece of code depending on that specific state gets notified, without the programmer explicitly writing code to notify other pieces of code, or doing connect() or addEventListener() etc.
What are your thoughts on that? I'd be interested to hear since I'm sure you have more experience than me.
This sounds interesting. The code examples reminded me of Qt signals but all the answers to my post suggest that JS signals would be much more powerful. Honestly, I'd need to take a closer look.
JS signals come from functional reactive programming, which is a generalization of the synchronous reactive programming of the Lustre and Esterel languages of the '80s and '90s. I believe the first version was FrTime, published in 2004.
You can think of reactive signals as combining an underlying event system with value construction, ultimately defining an object graph that updates itself whenever any of the parameters used to construct it change. You can think of this graph like an electronic circuit with multiple inputs and outputs, and like a circuit, the outputs update whenever inputs change.
The rationale for it is the fact that multiple frameworks provide their own versions of this mechanism. The proposal is to relocate extremely popular and common functionality from framework space to the language/runtime space. The popularity of React is itself the rationale for the utility of this idea, and any terse version of the rationale is for show. Is that a good enough rationale? Maybe, maybe not, but you are shooting the messenger.
Most importantly: OP is right re: vanilla example is most legible. Reading the proposal, I have no idea what this "Signal" word adds other than complexity.
Less important: I am really, really, really, really reluctant to consider that this is something that needs standardizing.
Disclaimer: I don't have 100% context if this concept is _really_ the same across all these frameworks.
But frankly, I doubt it, if it was that similar, why are there at least a dozen frameworks with their own version?*
Also, I've lived through React, Redux, effects, and so on becoming Fundamentally Necessary, until they're not. Usually when it actually is fundamental you can smell it outside of JS as well. (ex. promises <=> futures). I've seen 1000 Rx frameworks come into style and go out of style, from JS to Objective-C to Kotlin to Dart. Let them live vibrant lives, don't tie them to the browser.
* I know that's begging the question; to put it more precisely: if they are that similar and that set in stone that it's at a good point to codify, why are there enough differences between them to enable a dozen different frameworks that are actively used?
> Disclaimer: I don't have 100% context if this concept is _really_ the same across all these frameworks.
Very nearly[1] every current framework now has a similar concept, all with the same general foundation: some unit of atomic state, some mechanism to subscribe to its state changes by reading it in a tracking context, and some internal logic to notify those subscriptions when the state is written. They all have a varied set of related abstractions that build upon those fundamental concepts, which…
> But frankly, I doubt it, if it was that similar, why are there at least a dozen frameworks with their own version?*
… is part of what distinguishes each such framework. Another part is that state management and derived computations are only part of what any of the frameworks do. They all have, beyond their diverse set of complementary reactive abstractions, also their own varied takes on templating, rendering models, data fetching, routing, composition, integration with other tools and systems.
Moreover, this foundational similarity between the frameworks is relatively recent. It’s a convergence around a successful set of basic abstractions which in many ways comes from each framework learning from the others. And that convergence is so pervasive that it’s motivating the standardization effort.
This especially stands out because the reference polyfill is derived from Angular’s implementation, which only very recently embraced the concept. From reading the PR notes, the implementation has only minor changes to satisfy the proposed spec. That’s because Angular’s own implementation, being so recent, internalizes many lessons learned from prior art which also inform the thinking behind the spec itself.
This is very much like the analogy to Promises, which saw a similar sea change in convergence around a set of basic foundational concepts after years of competing approaches eventually drifting in that same direction.
[1]: Most notably, React is unique in that it has largely avoided signals while many frameworks inspired by it have gravitated towards them.
Explicit vs implicit dependencies (useEffect vs Signal.Computed/effect), and the fact that signals, in contrast to useState, can be used outside of a React context, which I assume is a good thing.
I personally mostly prefer more explicit handling of "observable values" where function signatures show which signals/observables are used inside them.
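To illustrate the difference (hypothetical names, not any framework's API): with explicit arguments, the dependencies appear in the signature, while an implicitly tracked computation closes over its signals.

```javascript
// Explicit style: the dependency is visible in the signature.
const parityOf = (counter) => (counter % 2 === 0 ? "even" : "odd");

// Implicit style: the computation closes over a signal, so you have to
// read the body (or trust the tracking system) to know what it depends on.
const counter = { get: () => 4 }; // stand-in for a signal getter
const parity = () => (counter.get() % 2 === 0 ? "even" : "odd");

const a = parityOf(3); // "odd"
const b = parity();    // "even"
```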
They’re very similar, and you can definitely squint right to see them as fundamentally the same concept… if while squinting you also see a React component itself as a reactive effect. Which is all technically correct (the best kind), but generally not what people mean when they’re talking about signals in practical terms.
Signals are fine-grained reactivity; React is coarse-grained reactivity. Legend-State adds signals to React, and I'd recommend it over Redux/Zustand, which we used to use.
> why are there enough differences between them to enable a dozen different frameworks that are actively used?
Because they are not in the standard library of the language? Because they all arrived at the solution at different times and had to adapt the solution to the various idiosyncratic ways of each library? Because this happens in each and every language: people have similar but different solutions until they are built into the language/standard library?
> Most importantly: OP is right re: vanilla example is most legible. Reading the proposal, I have no idea what this "Signal" word adds other than complexity.
The aim is to run computations or side effects only when the values they depend on change.
This is a perfectly normal scenario: you don't want to update all the data and re-render the UI of the full application tree whenever something changes.
DOM updates are the most popular example but it could really be anything.
Of course in simple examples (e.g. this counter) you might not care about recomputing every value and recreating every part of the DOM (apart from issues with focus and other details).
But in general, some form of this logic is needed by every JS-heavy reactive web app.
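A sketch of that cutoff behaviour (illustrative names, not the proposal's API): the derived parity is compared to its previous value, and downstream work runs only when it actually changed.

```javascript
let renders = 0;
let lastParity;

function setCounter(n) {
  const parity = n % 2 === 0 ? "even" : "odd";
  if (parity !== lastParity) { // equality cutoff: skip unchanged downstream work
    lastParity = parity;
    renders += 1;              // stand-in for a DOM update
  }
}

setCounter(2); // first render
setCounter(4); // parity unchanged (even -> even): no re-render
setCounter(5); // parity changed: re-render
// renders === 2
```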
Regardless of the implementation, when it comes to that, I'm not sure I see the benefit of building this into the language either.
> But frankly, I doubt it, if it was that similar, why are there at least a dozen frameworks with their own version?*
Welcome to the fashion cycle that is JavaScript. Given a few years, every old concept gets reinvented, and then you have half a dozen frameworks that are basically the same but sufficiently different that you have to relearn the APIs. This is what I think standardization helps circumvent.
A good standard library prevents fragmentation on ideas that are good enough to keep getting reinvented.
> But frankly, I doubt it, if it was that similar, why are there at least a dozen frameworks with their own version?*
To answer this specifically: signals are a relatively low-level part of most frameworks. Once you've got signals, there are still plenty of other decisions to make as to how a specific framework works that differentiate one framework from another. For example:
* Different frameworks expose the underlying mechanism of signals in different ways. SolidJS explicitly separates out the read and write parts of a signal in order to encourage one-way data flow, whereas Vue exposes signals as a mutable object using proxies to give a more conventional, imperative API.
* Different frameworks will tie signals to different parts of the rendering process. For example, typically, signals have been used to decide when you rerender a component - Vue and Preact (mostly) work like this. That way, you still have render functions and a vdom of some description. On the other hand frameworks like SolidJS and Svelte use a compiler to tie signal updates directly to instructions to update parts of the DOM.
* Different frameworks make different choices about what additional features are included in the framework, completely outside of the signal mechanism. Angular brings its own services and DI mechanism, Vue bundles a tool for isolating component styles, SolidJS strips most parts away but is designed to produce very efficient code, etc.
So in total, even if all of the frameworks shared the same signals mechanism, they'd all still behave very differently and offer very different approaches to using them.
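To make the first point concrete, the two API shapes might be sketched like this (simplified stand-ins, not the real SolidJS or Vue implementations):

```javascript
// Solid-style: reading and writing are separate functions, which nudges
// you towards one-way data flow.
function createSignal(value) {
  const read = () => value;
  const write = (next) => { value = next; };
  return [read, write];
}

// Vue-style: one mutable ref object with a conventional, imperative API.
function ref(value) {
  return {
    get value() { return value; },
    set value(next) { value = next; },
  };
}

const [count, setCount] = createSignal(0);
setCount(count() + 1); // reads and writes are distinct operations

const n = ref(0);
n.value += 1;          // the same object is both read and written
```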
As to why different frameworks use different implementations as opposed to standardising on a single library, as I understand it this has a lot to do with how signals are currently often tied to the component lifecycle of different frameworks. Because signals require circular references, it's very difficult to build them in such a way that they will be garbage collected at the right time, at least in JavaScript. A lot of frameworks therefore tie the listener lifecycle to the lifecycle of the components themselves, which means that the listeners can be destroyed when the component is no longer in use. This requires signals to typically be relatively deeply integrated into the framework.
They reference this a bit in the proposal, and mention both the GC side of things (which is easier to fix if you're adding a new primitive directly to the engine), and providing lots of hooks to make it possible to tie subscriptions to the component lifecycle. So I suspect they're thinking about this issue, although I also suspect it'll be a fairly hard problem.
Fwiw, as someone who has worked a lot with signals, I am also somewhat sceptical of this proposal. Signals are very powerful and useful, but I'm not sure if they, by themselves, represent enough of a fundamental mechanism to be worth embedding into the language.
> ...but since signals are something that is used in a variety of frameworks...
...common usage is not really a justification for putting it into the language standard, though. Glancing over the readme, I'm not seeing anything that would require changes to the language syntax and couldn't be implemented in a regular 3rd-party library.
In a couple of years, another fancy technique will make the rounds and make signals look stupid, and then we are left with more legacy baggage in the language that can't be removed because of backwards compatibility (let C++ be a warning).
From what I understand, a few/many of the big frameworks are converging on signals, and another commenter said that Qt had signals in the 90s https://news.ycombinator.com/item?id=39891883. I understand your worries, and I would appreciate some wisdom from non-JS UI people, especially if they have 20+ years of experience with them.
Every framework is moving to signals apart from React, and I'd say if this became a standard, even they would. This is like Promise. It's a sensible shared concept.
"In Preact, when a signal is passed down through a tree as props or context, we're only passing around references to the signal. The signal can be updated without re-rendering any components, since components see the signal and not its value. This lets us skip all of the expensive rendering work and jump immediately to any components in the tree that actually access the signal's .value property."
"Signals have a second important characteristic, which is that they track when their value is accessed and when it is updated. In Preact, accessing a signal's .value property from within a component automatically re-renders the component when that signal's value changes."
I think it makes a lot more sense in a context like that.
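A hand-rolled sketch of that idea (a stand-in for @preact/signals, not the library itself): components receive the signal reference, and only code that reads .value reacts to updates.

```javascript
function signal(initial) {
  const listeners = new Set();
  let v = initial;
  return {
    get value() { return v; },
    set value(next) { v = next; listeners.forEach((fn) => fn()); },
    subscribe(fn) { listeners.add(fn); },
  };
}

const count = signal(0);

// A "component" receiving the signal reference as a prop; passing the
// reference around does not itself cause any rendering.
let rendered = 0;
function Counter(props) {
  rendered += 1;
  return `count is ${props.count.value}`;
}

count.subscribe(() => Counter({ count })); // only the reader reacts
Counter({ count }); // initial render
count.value = 1;    // reruns only the subscribed reader
// rendered === 2
```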
You don’t need to pass a Preact signal as a prop to get reactivity. If you’re using Preact, signal references will make your component reactive by default, and if you’re using React you can introduce reactivity by way of the useSignals hook or a Babel plugin. (1)
React signals have become my go-to state management tool. So easy to use and very flexible.
I’m also a fan of local state, but there are some cases where it makes sense for a bit of global state - mainly user context.
However, you can use signals for local state as well, and they work amazingly. Being able to assign a new value to a signal without having to go through a setter is a way cleaner pattern, in my opinion.
The other use case is for communication between micro frontends. It’s so nice to just be able to import/export a signal and get its reactivity. Before them, I would create a pub/sub pattern, and that’s just not as clean.
Since reactivity is not baked into JavaScript, adding reactivity is going to add abstraction overhead. It's meant to be used when it's needed, not necessarily as a default way to work with state.
In my experience, the big benefit is the ability to make reactive state modular. In an imperative style, additional state is needed to track changes. Modularity is achieved using abstraction. Only use when needed.
> Sounds like they want premature memoization
It's a balance to present a simple example that is applicable. Cases where reactivity has a clear benefit tend to be more complex examples, which are more difficult to demonstrate than a simple, less applicable one.
I think there is room for improvement in how we explain this. The problems aren’t really visible in this small sample and come up more for bigger things. PRs welcome.
Perhaps mentioning the tradeoffs between a simple easy to explain example vs a more obvious comprehensive example. With links to more complex code bases? With a before & after?
I think it’s worth avoiding having to change your design when you pass some threshold of complexity. The vanilla JS approach has some scaling limitations in terms of state graph complexity, and the problem isn’t the ergonomics above and below the threshold, but the discontinuous change in ergonomics when you cross that threshold.
Indeed at a certain scale the "easy" approach ends up becoming a mess. A simple counter isn't complex enough but this is a great idea and would be a positive for the language.
> Am I the only one that thinks the vanilla js example is actually easier to read and work with?
Even if that were true for this example, the signal-based model grows linearly in complexity and overhead with the number of derived nodes. The callback-based version is superlinear in complexity, because you have an undefined/unpredictable evaluation order for callbacks, producing a combinatorial explosion of possible side-effect traces. It also scales less efficiently, because you could potentially run side effects and updates multiple times, whereas the signal version makes additional guarantees that can prevent this.
I much prefer the explicit get/set methods. MobX, I think, used the magic approach, as did Svelte, and I believe Svelte has realized it was a mistake. It makes it harder to reason about the code; better to be explicit.
I am not sure what the fuss is about, honestly. Elon didn't storm off the set, he answered all the questions asked, Don seemed to pick up on some negative vibes but of course he would since they're having an intense discussion where there were several points at which they had to simply disagree and move to the next subject.
Don irritated the guy who invited him into a partnership and then that partnership was rescinded. Don himself prefaces the clip by saying his show wasn't cancelled by X. Seems pretty in line with what Elon said in the interview about how everyone's free to post, but that doesn't mean X will promote it. X decided not to promote Don Lemon's show.
I think we've found Naughty Old Mr Car's HN account.
Also:
> Don also raised his voice
Oh, no! How could he?!
I’ve long thought that it would be useful, or at least extremely funny, for Jeremy Paxman to spend a bit of time in the US; it would be fascinating to see how the likes of Trump and Musk would cope with an actual adversarial interview. They’d probably _melt_; US interviewers are extremely deferential to the rich and powerful by contrast.
It's just another case of "spaceman bad". Even his recent comment about Microsoft requiring a user account for installing Windows seemingly made some people make a 180 just to disagree with him.
When I shopped for a new watch two years ago, I was looking for a simple non-smart, easy to use, robust watch that didn't have a ton of features. I ended up getting a Freestyle watch and I have been very satisfied with it.
It tells the time and tells me time elapsed, the light isn't obnoxiously bright at night, I'm not fiddling with the interface, I'm not getting pinged, my data isn't being collected, I'm not afraid of breaking it or soaking it, I'm not charging it.
I agree, Java is wordy, ritualistic, and prone to overcomplication. Build/run/write loops are slow and painful, and frameworks and toolkits try to do so much that they inevitably get in the way.
Well, it's written in the opinion section of Newsweek. Also, there are quite a few links throughout the article, but they may be easy to miss, because the only indication they are links is the red line underneath; otherwise they look like regular text.
I personally thought it was well-written, and I find it relieving to hear someone admit the many flaws in the execution of the covid response.
For an article titled "It's Time for the Scientific Community to Admit We Were Wrong About COVID and It Cost Lives", it needs to clearly list out exactly what the scientific community got wrong. It needs strong and convincing evidence detailing each and every claim. It needs to show us the math on how many lives would have been saved if we had made different decisions.
Rather, the entire article is based on a bunch of unconvincing links. For example, the author linked to a news story[0] about how experts signed a letter calling for school closures. Yet the author did not provide any evidence for why this decision was wrong.
The vast majority of the article is basically just saying the science community got it wrong because they were wrong.
I can be convinced that the science community got some stuff wrong. I absolutely can be convinced. But this article isn't it.
For those interested in some good history of the punk movement in East Berlin, I recommend the book Burning Down the Haus by Tim Mohr. It really gives a window into the courage, creativity, and motivation of the youth, and their oppression by the Stasi.
While I agree that's a big part of the problem that needs to be solved, I don't think any one thing is going to tackle such a complex, multi-faceted issue. For example, the book Evicted by Matthew Desmond was a good read and told the story of some families on the edge of homelessness, facing eviction for various reasons such as drug addiction, low income, etc. These problems won't go away just by addressing housing regulations, although more affordable housing availability could certainly help.
You can get a good grasp of the size and complexity of HMIS data by perusing the HUD's HMIS Data Dictionary [0]. It shows the business objects and their fields, allowing you to get a rough ERD-like idea of how the HUD views HMIS data. Many HMIS systems, from what I've heard and seen (I've only done a little work in the space), don't use this as a data model internally though or if they do it's not exposed that way through their APIs if they even have APIs that they're willing to let you use. Presumably though, they must have to report it that way to the HUD for funding.
I agree, it would be great to have a fresh, pragmatic take on this data, and I'll be reading more about that Built for Zero framework.
One Continuum of Care (CoC) I talked to described their old HMIS system from a private company that charged something like $70K/yr (iirc) to keep using their software. That seemed excessive to me, but they didn't seem to mind too much. Migrating off would have been extremely difficult anyway. Their bigger stated need was the ability to easily send data from their system to other CoC systems in the area as the people they were helping moved around or were transferred to more relevant services. Each time those people encountered another CoC, the intake process basically had to be started over again at the new location, which was a drain on both the receivers and givers of services.
We made a couple of attempts at building a sort of hub for pulling and pushing data to the various systems, but only one entry-system company was very supportive, while we had to tread carefully around some of the HMIS system makers, who seemed very protective and unwilling to expose or share their APIs, if they had any. One big challenge was finding a single unique identifier for each person across systems, which is why a local by-name list like the article mentions is very intriguing.
"The latest conditions on mountain bike and hiking trails are being shared inside communities like Reddit but not on the web."
I just wanted to mention that a friend of mine made an app for user-reported trail conditions that might be worth taking a look at:
https://trekko.app/
I wonder how much time has been wasted by devs who get the order wrong. I know I've been guilty of it so many times. It's really frustrating, but I like that someone took the time to write down which order goes with which software; this is a nice reference.
- "The setup is noise and boilerplate heavy." Actually the signals example looks just as noisy and boilerplate heavy to me. And it introduces new boilerplate concepts which are hard for beginners to understand.
- "If the counter changes but parity does not (e.g. counter goes from 2 to 4), then we do unnecessary computation of the parity and unnecessary rendering." - Sounds like they want premature memoization.
- "What if another part of our UI just wants to render when the counter updates?" Then I agree the strawman example is probably not what you want. At that point you might want to handle the state using signals, event handling, central state store (e.g. redux-like tools), or some other method. I think this is also what they meant by "The counter state is tightly coupled to the rendering system."? Some of this document feels a little repetitive.
- "What if another part of our UI is dependent on isEven or parity alone?" Sure, you could change your entire approach because of this if that's a really central part of your app, but most often it's not. And "The render function, which is only dependent on parity must instead "know" that it actually needs to subscribe to counter." is often not an unreasonable obligation. I mean, that's one of the nice things about pure computed functions- it's easy to spot their inputs.