If I'm understanding correctly, this is binding event handlers "just in time" instead of when a component initializes. Isn't that just a tradeoff between working the CPU at load time vs. working the CPU on user interaction?
This doesn't seem like a great tradeoff to me. Sure, maybe you save time during component initialization, but while that is happening the user is digesting the information anyway. Then once they make their decision to act, there's no extra delay to produce the next state. However, with a "just in time" event binding, now the user has to wait (slightly) longer after they've already made their decision, which seems worse.
Haven't dug too deep, but my understanding is that this doesn't bind event handlers just in time. Instead, a tiny blocking bootstrapping script sets up event delegation: it attaches a top-level event handler that catches all events as soon as the first chunk of HTML streams in.
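Roughly, the pattern looks something like this. This is just a sketch of the general delegation idea, not any framework's actual implementation; the `on:click` attribute convention, chunk URLs, and function names here are all made up for illustration:

```javascript
// Pure helper: walk up from an event target looking for a declared
// handler attribute, e.g. <button on:click="./chunks/addToCart.js">.
function findHandlerUrl(target, attr) {
  for (let el = target; el; el = el.parentElement) {
    const url = el.getAttribute && el.getAttribute(attr);
    if (url) return { el, url };
  }
  return null;
}

// Tiny blocking bootstrap: one capturing listener per event type,
// installed in <head> before the rest of the HTML streams in, so no
// interaction is ever missed even though no real handlers exist yet.
function bootstrap(events = ["click", "input", "keydown"]) {
  for (const type of events) {
    document.addEventListener(type, async (ev) => {
      const found = findHandlerUrl(ev.target, `on:${type}`);
      if (!found) return;
      // Download just the one handler chunk on demand, then invoke it.
      const mod = await import(found.url);
      mod.default(ev, found.el);
    }, { capture: true });
  }
}
```

The capturing listener is the whole trick: one script a few hundred bytes long covers every element on the page, and the per-element code only crosses the network if the matching event actually fires.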
In addition, it sets up an IntersectionObserver. Then, depending on when an event happens, one of two things occurs: if the event fires early enough during page load, that one event handler has to be downloaded piecemeal on demand; if it fires late enough, the action happens instantaneously, because the IntersectionObserver already downloaded the handler in anticipation that the user would interact with the element, it being visible and all.
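The prefetch half could be sketched like so. Again, this is a hypothetical illustration: the `on:click` attribute name and the `modulepreload` strategy are my assumptions, not anything confirmed from the actual codebase:

```javascript
// Track which chunk URLs we've already asked for, so scrolling an
// element in and out of view doesn't trigger duplicate requests.
const prefetched = new Set();

function prefetchChunk(url) {
  if (!url || prefetched.has(url)) return;
  prefetched.add(url);
  const link = document.createElement("link");
  link.rel = "modulepreload"; // fetch + parse the module, but don't run it
  link.href = url;
  document.head.appendChild(link);
}

// Watch every element that declares a handler; once one scrolls into
// view, warm its chunk in the cache so a later click resolves instantly.
function observeInteractive(root = document) {
  const io = new IntersectionObserver((entries) => {
    for (const e of entries) {
      if (!e.isIntersecting) continue;
      prefetchChunk(e.target.getAttribute("on:click"));
      io.unobserve(e.target); // one prefetch per element is enough
    }
  });
  for (const el of root.querySelectorAll("[on\\:click]")) io.observe(el);
}
```

So the "slow path" (user clicks before the chunk arrives) and the "fast path" (chunk was already preloaded because the element was visible) share the same dynamic `import()`; the only difference is whether it hits the network or the module cache.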
The trade-off is that the download of everything else in JS-land effectively gets deferred, because loading is fragmented into per-handler chunks. But the cleverness of the trade-off is that in typical scenarios, most of that deferred code is never going to be activated by the user in the first place (or at least not in quick enough succession to overload the network).
Yeah... this screams of people who never had bad networks. If you're going to add interactivity as a JIT thing, what happens when the user has a shitty connection? You give the impression that the page has loaded, but it hasn't really loaded. This massively increases the number of connections the user has to make. It's _more_ overhead, with a bunch of extra HTTP calls.
The goal here, as I understand it, is to build sites with fully interactive tooling whose main purpose is to be read. Fast initial readability is paramount, and the alternative to JIT interactivity would be a full page transition, so the network problem doesn't actually make anything worse.
It may not be the best possible set of trade-offs for any particular application but it seems like a set of trade-offs worth exploring.
it's all to appease the Google black box. UX always takes a back seat to SEO, because if there are no users, there's no one to irritate with bad UX in the first place. If it weren't for SEO we would all have dropped SSR + hydration long ago. Absolutely no one likes unifying URLs and content and all that shit across two sides of a single app.