I’ve been a ClojureScript developer for many years, and we (as a community) have had the pleasure of the reagent library. From the early days of React, it was basically a signals-like library with an ergonomic API. Signals are almost a requirement in ClojureScript, assuming you want to do any sort of REPL-based workflow.
In the time that I’ve been using ClojureScript, the whole react community has switched to hooks. I find it so baffling - requiring your entire data structure to be contained within your component lifecycle? The thousands of calls to individual “useState” to track the same number of state items? The fact that every function is closing over historical state unless you manually tell it to change every time the variable does — and then this last bit in combination with non-linear deep equality comparisons?
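The stale-closure point doesn’t even need React to demonstrate; it’s plain JS closure behavior (a toy sketch, not React code):

```javascript
// A callback captures the value of `count` at creation time, not a live reference.
function makeCounterView(count) {
  // This closure "remembers" the count it was created with.
  return () => `count is ${count}`;
}

let count = 0;
const view = makeCounterView(count); // like a hook callback created on one render
count = 5;                           // state has moved on...
console.log(view());                 // "count is 0": the closure still sees the old value

// Recreating the closure (what a dependency array forces you to do)
// picks up the new value.
const freshView = makeCounterView(count);
console.log(freshView());            // "count is 5"
```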
I’ve recently been switching phrasing.app from reagent atoms to preact/signals for performance reasons (and as part of a longer-horizon move from React to Preact), and I have to say it’s been fantastic. Maybe 50 lines of code to replicate reagent atoms with preact/signals, all the benefits, and much, much faster.
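Roughly the shape of such a shim (a toy sketch with a hand-rolled signal; the real preact/signals API and my actual code differ):

```javascript
// Minimal stand-in for a signal: a readable/writable value with subscribers.
function signal(initial) {
  let value = initial;
  const subscribers = new Set();
  return {
    get value() { return value; },
    set value(next) {
      value = next;
      subscribers.forEach((fn) => fn(next)); // notify on every write
    },
    subscribe(fn) { subscribers.add(fn); return () => subscribers.delete(fn); },
  };
}

// Reagent-flavored helpers: atom / deref / reset! / swap!
const atom = (init) => signal(init);
const deref = (a) => a.value;
const reset = (a, v) => { a.value = v; return v; };
const swap = (a, f, ...args) => reset(a, f(a.value, ...args));

const state = atom({ count: 0 });
swap(state, (s) => ({ ...s, count: s.count + 1 }));
deref(state).count; // 1
```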
Very happy that there is a React-like library so devoted to first-class support of signals.
Calling hooks "traditional" in relation to signals seems fine, lest that word be relegated in time to whatever you in particular care about at this juncture.
I'm sure some 80 year olds before us thought the same of us using that term.
> Calling hooks "traditional" in relation to signals seems fine, lest that word be relegated in time to whatever you in particular care about at this juncture.
"Traditional" hooks are 6 years old. I think it's to early to call it traditional. Given that literally everyone else looked at this "tradition" and chose differently. Namely, signals.
Signals were popularized by SolidJS, but SolidJS's Ryan Carniato will keep telling you that what everyone calls signals now has its roots in libs like KnockoutJS from 2010. And everyone has been busy using signals for the past three years.
Given the amount of frameworks that implement signals today (including monsters like Angular), it's React who's not following tradition.
I guess the bigger point is that we could offer the same charity to the OP that I'm lending you here in your use of the wrong "to".
There are meaningful critiques elsewhere in the comments about the piece with some semblance of charitable interpretation. We've elected instead to manufacture a snide, pedestrian haughtiness about wording.
These "signals" don't seem like FRP signals (which are time-varying values, like a "signal" to an EE but not limited to real numbers) but more like Qt signals or MVC "model has updated" notifications.
Yes, you're right. I think I read some later FRP papers that used the term "signal" instead of "behavior", and I thought that was what you were talking about. FRP "events" are kind of like the kind of signals being discussed here, but there are still big differences.
> FRP "events" are kind of like the kind of signals being discussed here, but there are still big differences.
I don't think the differences are that significant. JS signals are basically `latch(frp-event-stream)`, i.e. FRP events yield edge-triggered systems and JS signals yield level-triggered systems, and latches transform edge-triggered into level-triggered.
I understand why people can see JS signals as FRP behaviours though, as both have defined values at all times t, but the evaluation model is more like FRP events (push-based reactivity), so I think edge vs. level triggered is the real difference, and these are interconvertible without loss of information.
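A toy latch in plain JS makes the edge-to-level conversion concrete (not any particular library's API):

```javascript
// Latching an edge-triggered event stream into a level-triggered value:
// events arrive as discrete edges, but the latched value is readable
// at any time t, like a signal.
function latch(initial) {
  let current = initial;            // the held "level"
  const listeners = new Set();
  return {
    fire(value) {                   // edge: an event occurs
      current = value;              // latch it
      listeners.forEach((fn) => fn(value));
    },
    read() { return current; },     // level: defined at all times
    onChange(fn) { listeners.add(fn); },
  };
}

const temp = latch(20);
temp.fire(23);    // edge-triggered update...
temp.read();      // 23: level-triggered read, long after the event passed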
IIRC, the FRP literature calls both of them "signals" as a general category, just two different types.
This article is completely backwards. We do not want to manually manage what-gets-refreshed-when. The whole point of React was to react to state changes automatically.
One upside of this approach is that the only subtree that needs to be re-rendered is the specific element whose state got mutated.
Another upside of this approach is that the code doing the mutation is very close to the actual UI element that triggered it. Of course, this rapidly turns into a downside as the size of the codebase grows...
It's not automatic though. There are function calls (you read a signal by calling it), and you must `createSignal` for every derived value.
When you ignore the performance aspect, React objectively has the least boilerplate for reactivity.
The question, that I genuinely don't know the answer to, is a) whether the performance improvement is worth it, and b) whether that's still the case after the compiler.
React is the most inefficient in terms of performance (after imgui); it recomputes everything on every event like mousemove. It was probably made for simple forms with tens of variables.
> The whole point of React was to react to state changes automatically.
It isn't, given that React requires you to provide an array of dependencies when using useMemo and useEffect. The point of React is to automatically update the DOM when a virtual DOM tree gets modified.
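What a dependency array buys can be sketched outside React entirely; a hypothetical memoizer that recomputes only when the listed deps change (shallow `Object.is` comparison, similar in spirit to React's, but not React code):

```javascript
// Recompute only when the listed dependencies change.
function makeMemo() {
  let lastDeps = null;
  let lastValue;
  return (compute, deps) => {
    const changed =
      lastDeps === null ||
      deps.length !== lastDeps.length ||
      deps.some((d, i) => !Object.is(d, lastDeps[i]));
    if (changed) {
      lastValue = compute();
      lastDeps = deps;
    }
    return lastValue;
  };
}

const memo = makeMemo();
let calls = 0;
memo(() => { calls++; return 1 + 2; }, [1, 2]); // computes; calls = 1
memo(() => { calls++; return 1 + 2; }, [1, 2]); // deps unchanged; calls still 1
memo(() => { calls++; return 1 + 2; }, [1, 3]); // dep changed; recompute, calls = 2
```

The point being: the framework can't see inside your closure, so you have to tell it what the closure depends on.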
Yeah, I'm interested to learn more about signals, but the article seems to miss the whole point of brute-force state updates -- less cognitive complexity.
I can understand the point, and the fact that in virtual dom implementations you in general specify the state -> view mapping and don't need to distinguish between first render and updates.
However, practically and personally I find working with signals way simpler, and that is even without considering the debugging experience.
I don't like the style of code in the article, with weird functions like "useState" and "useSignal". Looks ugly to me.
Also, it seems that with signals you must use immutable values only. Imagine if you have, let's say, a text document model, and you need to create a new copy every time the user types a letter. That's not going to work fast. And there will be no granular updates, because the signal only tracks the value (whole document), not its components (a single paragraph).
Also, the article mentions the rarely used Preact and doesn't mention Vue. Vue can track mutable object graphs (for example, a text document model). But Vue uses JS proxies, which have a lot of issues of their own (you cannot access private fields, you have to deal with mixing proxies and real values when adding them to a set, and browser APIs break when a proxy is passed).
Also I don't like that React requires installing Node and compilation tools, this is a waste of time when making a quick prototype. Vue can be used without Node.
Immutable does not mean you have to copy the whole structure. You can store only the changes. This is how immutable data structures work in functional languages such as Haskell.
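A small plain-JS sketch of structural sharing (hypothetical document shape, not any particular library):

```javascript
// Replacing one paragraph copies only the path to it; untouched
// paragraphs are shared between the old and new versions.
function setParagraph(doc, index, text) {
  const paragraphs = doc.paragraphs.slice(); // copies the array of references only
  paragraphs[index] = { ...paragraphs[index], text };
  return { ...doc, paragraphs };
}

const v1 = { title: "notes", paragraphs: [{ text: "a" }, { text: "b" }] };
const v2 = setParagraph(v1, 1, "B");

v2.paragraphs[0] === v1.paragraphs[0]; // true: the unchanged paragraph is shared
v1.paragraphs[1].text;                 // "b": the old version is untouched
v2.paragraphs[1].text;                 // "B"
```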
I know about different data structures but they are all more complicated than mutable data structures. For example, if you "store only changes", it will take more time to access the data, and you need to flatten your changes once in a while.
Also, for nested data structures you need to either do path copying, or use "modification boxes" [1].
> Also I don't like that React requires installing Node and compilation tools, this is a waste of time when making a quick prototype. Vue can be used without Node.
You lose a lot though! You don't get minification and tree shaking, single file components, or hot module reloading. In practice, HMR outweighs the cost of setting up a Node/Deno/Bun environment.
> it seems that with signals you must use immutable values only. Imagine if you have, let's say, a text document model, and you need to create a new copy every time the user types a letter.
Too small. Imagine if you have a 2GB mutable file. Each keystroke in the middle of the file has to shift the entire second gigabyte.
Immutable representations of large buffers aren't flat arrays. The most obvious abstract semantics that we're depending on here is a map from indexes to byte segments. Immutably rearranging indexes can be made very fast.
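For instance, a toy piece-table-style edit in plain JS (hypothetical and heavily simplified; real editors add balancing, undo, etc.):

```javascript
// A document is a list of segments pointing into immutable buffers.
// Inserting text never copies the original buffer; it only splits one
// segment and points at a new append-only buffer.
function render(buffers, pieces) {
  return pieces
    .map((p) => buffers[p.buf].slice(p.start, p.start + p.len))
    .join("");
}

function insertAt(buffers, pieces, pos, text) {
  const newBuffers = [...buffers, text]; // original buffers untouched
  const added = { buf: newBuffers.length - 1, start: 0, len: text.length };
  const out = [];
  let offset = 0;
  for (const p of pieces) {
    if (pos >= offset && pos <= offset + p.len) {
      const left = pos - offset;
      if (left > 0) out.push({ ...p, len: left });           // prefix of split piece
      out.push(added);                                        // the inserted text
      if (p.len - left > 0)
        out.push({ buf: p.buf, start: p.start + left, len: p.len - left }); // suffix
      pos = -1;                                               // insert only once
    } else {
      out.push(p);
    }
    offset += p.len;
  }
  return [newBuffers, out];
}

const [bufs, ps] = insertAt(
  ["hello world"],
  [{ buf: 0, start: 0, len: 11 }],
  5,
  ","
);
render(bufs, ps); // "hello, world"
```

The original 11-byte buffer (or 2GB buffer) is never rewritten; only the small segment list changes.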
> Imagine if you have, let's say, a text document model, and you need to create a new copy every time the user types a letter. That's not going to work fast. And there will be no granular updates, because the signal only tracks the value (whole document), not its components (a single paragraph).
Funnily enough, back when storage was slow enough that saving a text document involved a progress bar, one of the big advantages Word had over competitors was lightning fast saves, which they accomplished by switching to an immutable data structure. That is, while others would update and save the data, Word would leave it untouched and just append a patch, skipping lots of time spent rewriting things that an in-place edit would shift.
The “copy everything” mental model of immutable programming is really about as wrong as a “rewrite everything” mental model of mutable programming. If it happens it’s bad code or a degenerate case, not the way it’s supposed to happen or usually happens. Correctly anticipating performance requires getting into a lot more detail.
Preact signals are far superior to other state management patterns for React, but don't use a ContextProvider as shown in this article; pass the signals as props instead.
Passing the value on and on as you go down a component chain gets tedious. Context lets you avoid all that.
Where signals offer an important benefit is in localizing re-rendering. If you use context with regular, non-signal values the entire VDOM tree has to be re-rendered when the context value changes (because there's no way to know what code depends on its value). With signals you can change the value of a signal in the context without changing the context itself, meaning that the only part of the VDOM tree that gets re-rendered is the one using the signal.
With performance considerations out of the way context becomes a really interesting way to provide component composition without having to specify the same props over and over as you make your way down a chain.
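The identity argument can be sketched in plain JS (a stand-in signal object, not real Preact code):

```javascript
// Context consumers re-render when the context *value* changes identity.
// A signal's identity never changes; only its .value does, and only
// subscribers that actually read the signal get updated.
function makeSignal(initial) {
  const subscribers = new Set(); // components that actually read the signal
  let value = initial;
  return {
    get value() { return value; },
    set value(next) { value = next; subscribers.forEach((fn) => fn()); },
    subscribe(fn) { subscribers.add(fn); },
  };
}

const countSignal = makeSignal(0);
const contextValue = countSignal; // what the provider distributes

let rendersOfSubscriber = 0;
countSignal.subscribe(() => rendersOfSubscriber++); // the one component reading it

countSignal.value = 42;
contextValue === countSignal; // true: context identity unchanged, no tree-wide re-render
rendersOfSubscriber;          // 1: only the subscribed component updated
```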
ContextProvider is for when you have some data that any random component might need, and you don’t want to clutter up the system by passing it through every component (this is called “prop drilling”). Or maybe you can’t pass it through some container component that you don’t control.
That consideration is orthogonal to what the useful data is. It could be a signal, or not. In other words, signals are not an alternative to context.
It's hilarious that the Observable pattern that MVC and Qt are organized around has become "a game-changer for large applications where context is used to distribute state across many components" this year. MVC is maybe 50 years old now?
Thanks! But what's being described there as "observable" is something other than the Observer pattern that "observable" comes from, which is closer to what it calls "signals". https://en.wikipedia.org/wiki/Observer_pattern
I think they are the same: the Observable/Observer pattern requires a manual subscription, and publishing is done through a callback offering the latest in a stream of values.
See https://github.com/tc39/proposal-signals?tab=readme-ov-file#... for more on how signals differ.
Mainly, no manual bookkeeping; the signal acts kind of like a handle, and it allows lazy/computed signals that reference other signals and do the change tracking for you.
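A toy version of a lazy computed signal, roughly the shape the proposal describes (a simplified sketch, not the proposal's actual `Signal.State`/`Signal.Computed` API):

```javascript
// Automatic dependency tracking: reading a signal inside a computation
// registers the computation as a dependent, with no manual subscribe.
let activeComputation = null;

function signal(value) {
  const deps = new Set();
  return {
    get() {
      if (activeComputation) deps.add(activeComputation); // auto-track reader
      return value;
    },
    set(next) {
      value = next;
      deps.forEach((c) => { c.stale = true; }); // mark dependents; compute lazily
    },
  };
}

function computed(fn) {
  const node = { stale: true, cached: undefined };
  return {
    get() {
      if (node.stale) {
        const prev = activeComputation;
        activeComputation = node;
        node.cached = fn();       // re-run only when stale AND actually read
        node.stale = false;
        activeComputation = prev;
      }
      return node.cached;
    },
  };
}

const a = signal(2);
const doubled = computed(() => a.get() * 2);
doubled.get(); // 4
a.set(5);      // marks `doubled` stale; nothing recomputes yet (lazy)
doubled.get(); // 10: recomputed on read
```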
It remains a bit of a mystery to me that applications seem to grow further and further from traditional game techniques. Both in how state is propagated through the application, and in how it is represented.
Like, I understand why people aren't going full 3d game simulation style for applications. But I don't understand why things are as divergent as they are?
Just general code organization is all I really mean. Typically an explicit main loop where you go through all of the stuff that the game cares about. With the proliferation of frontend frameworks, you often don't even have an analogue of a main method, anymore.
I have similar questions on asset management. But I think that one makes a bit more sense, though? Game studios often have people that are explicitly owners of some of the media. And that media is not, necessarily, directly annotated by the developer. Instead, the media will be consumed as part of the build step with an edit cycle that is removed from the developer's workflow.
Game engines do not own their main loop anymore either. A lot of host platforms are event-based now, including iOS, macOS, Android, the web, and to a lesser extent Windows.
This means you tell the host ‘call this function every X milliseconds’. It also means you do not have full control of a main loop; you get event calls instead.
It is moving more towards event-based, like on the web :) It prevents lockups.
But even in that scenario, you have a main function that is called under a deadline, no? You don't necessarily set the heart beat, but you do have a specific method that is called over and over.
That is, yes, I shouldn't have mentioned the "main" method. You don't even necessarily want to own the main loop, if you will. The logic that you run every iteration of the loop, though, is something you will almost certainly see in most games. I don't think I've seen many (any?) applications where you could identify the main loop logic. It's typically spread out over god knows how many locations in the code.
I can't say, as I'm not familiar with Preact signals, but I do know the Angular team brought on the author of SolidJS to implement signals in Angular. Though I think I remember reading at some point that Preact signals were essentially directly ported from SolidJS, so they're likely similar if nothing else. (Someone correct me if I'm wrong)
Neither of those things are true to my knowledge... I can find absolutely no evidence of Ryan Carniato being involved with Angular Signals (nor did I hear anything whilst they were being developed) and Preact Signals certainly weren't a port of Solid. Same name but internals share nothing and if anything, Preact's signals most closely resemble Vue's refs in API (though completely different internals -- there was no porting).
My mind immediately jumped to the recent Chat Control drama after reading the title.
One way I frame such issues in my mind is that it's about who has control (and how much) over what is rendered and when on the screen I carry in my pocket. To some extent it is now Signal when I use the app. One day it might be my State instead.
The biggest downside of Knockout is that it parses the template from the DOM, and the template is rendered as DOM until first execution. Then there's the fact that it evals its bindings. I suppose TKO should help with those issues, but it seems kinda dead.
Knockout's reactivity primitives are also a lot more naive than modern signals implementations.
It's a collaboration between multiple library maintainers, attempting to standardize and unify their shared change-tracking approach.