Nice work! I have a no-dependencies project with somewhat similar goals in frameable/el[0], which takes more inspiration from Vue in its interface.
WebComponents give us so much, but just not quite enough to be usable on their own without a little bit more on top. Reactivity with observable store and templating with event binding are the big missing pieces.
The main reason is that they're too low-level to use directly.
They do a lot, but stop just short of being useful without something of a framework on top. I tried hard to use them directly, but found that it was untenable without sensible template interpolation, and without helpers for event binding.
Here's my shot at the smallest possible "framework" atop Web Components that makes them workable (and even enjoyable) for an application developer:
It's just ~10 SLOC, admittedly dense, but it makes a world of difference in terms of usability. With that in place, you can write markup in a style not too dissimilar from React or Vue, like...
<button @click="increment">${this.count}</button>
Whereas without it, you need to bind your own events and manage your own template interpolation, with sanitizing, conditionals, loops, and all the rest.
If we could get something in this direction adopted into the spec, it would open up a lot of use cases for Web Components that they just can't address as-is.
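To make that concrete, here's a rough sketch of the general shape (not my actual ten lines, and it skips sanitization entirely): a base class that renders a template string into innerHTML and wires up @event="methodName" attributes, re-rendering after each handler runs.

    class TinyElement extends HTMLElement {
      connectedCallback() { this.render(); }
      render() {
        this.innerHTML = this.template;  // subclass supplies a `template` getter
        // Bind @click="increment"-style attributes to component methods.
        for (const el of this.querySelectorAll('*')) {
          for (const attr of [...el.attributes]) {
            if (!attr.name.startsWith('@')) continue;
            el.addEventListener(attr.name.slice(1), () => { this[attr.value](); this.render(); });
          }
        }
      }
    }

    class MyCounter extends TinyElement {
      count = 0;
      increment() { this.count += 1; }
      get template() { return `<button @click="increment">${this.count}</button>`; }
    }
    customElements.define('my-counter', MyCounter);

Naively re-rendering the whole component on every event is wasteful, but for small components it's fine and it keeps the code tiny.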
Template Instantiation is supposed to be the answer, at least in part, but it's been held up since 2017[0].
The bigger problem is that the web platform stakeholders (i.e. the committees that industry influences, and the browser makers) simply didn't listen to what regular developers have been saying about what they want from the web platform w/r/t better built-in, platform-provided primitives. It seems to me that web components have largely not taken on the lessons of component-based development that everyone has come to expect.
It’s still a weird imperative API with no rendering or data binding primitives
You don't need reactive data binding. You can simply watch for HTMLElement attribute changes and invoke a render method whenever that occurs. It helps to improve your app architecture if you do this. Reactive rendering can be a footgun and often leads to unnecessary double or triple rendering. It's better to be able to control rendering explicitly from a single binding, which is exactly what HTMLElement offers; you just need to adhere to good architectural principles.
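Roughly, the pattern looks like this (a minimal sketch; the element and attribute names are made up):

    class UserCard extends HTMLElement {
      // Only these attributes trigger attributeChangedCallback.
      static get observedAttributes() { return ['name', 'status']; }

      connectedCallback() { this.render(); }

      attributeChangedCallback(name, oldValue, newValue) {
        if (oldValue !== newValue) this.render();  // one explicit, controlled render
      }

      render() {
        this.textContent = `${this.getAttribute('name')} (${this.getAttribute('status')})`;
      }
    }
    customElements.define('user-card', UserCard);

    // Each attribute change maps to exactly one render call:
    // document.querySelector('user-card').setAttribute('status', 'online');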
Many modern devs are too deeply involved in the React cult to see past its many over-engineered and flawed abstractions and struggle to see the simpler approach which necessitates adhering to some basic but highly beneficial architectural constraints.
See, that's weird, because the parent comments are about using HTML template binding, and you seem to have suggested instead that a React-like render function model would be better, while simultaneously bashing React.
I am no fan of React, but this comment confuses me.
I don't have a problem with the idea of making each component have a render function which can be called whenever you need to render or re-render the component and I concede that there is some elegance in being able to construct a complex HTML component using a single template string. But you don't need React to do this. This idea of rendering a component whenever its internal state changed existed long before React.
React didn't even invent the idea of reactive components; that was Angular 1. I honestly don't know why React became so popular initially. After Angular 1, there was Google's Polymer which was far more elegant and closer to native Web Components (and fixed many of Angular 1's flaws) - I suspect it's because some devs didn't like that you had to use some polyfills for certain non-Chrome browsers.
Anyway now we have Web Components which work consistently without polyfills on all browsers so I don't see any reason not to use plain Web Components or use something more lightweight such as Lit elements.
Just like how plenty of people see reasons to start wars and waste tax payer money on bureaucracy. Or why billions of people choose to join one religion instead of another... All logical right?
How do you explain different religions being very popular and yet contradicting each other on many critical points? They can't both be right if they contradict each other yet they may both be hugely popular...
In my experience watching for attribute changes is very fast; I suspect in part because you need to explicitly list out the attributes which you're watching for changes and it only triggers the attributeChangedCallback lifecycle hook if one of those specified attributes changes.
React renders aren't always cheap. It depends on the complexity of the component and you don't have as much control over the rendering. I've worked on some React apps which were so slow due to unnecessary re-renders that my new computer would often freeze during development. I was able to fix it by refactoring the code to be more specific with useState() (to watch individual properties instead of the entire object) but this flexibility that useState provides is a React footgun IMO. With HTMLElement, you're forced to think in terms of individual attributes anyway and these are primitive types like strings, booleans or numbers so there is no room for committing such atrocities.
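For illustration, the useState refactor I mean looks roughly like this (hypothetical component, not the actual app):

    import { useState } from 'react';

    // Before: one big object. Every field change produces a new object, so memoized
    // children or effects keyed on `form` run again even when only `bio` changed.
    function ProfileBefore() {
      const [form, setForm] = useState({ name: '', bio: '' });
      return (
        <textarea
          value={form.bio}
          onChange={e => setForm({ ...form, bio: e.target.value })}
        />
      );
    }

    // After: each property gets its own state, so updates stay narrowly scoped.
    function ProfileAfter() {
      const [bio, setBio] = useState('');
      return <textarea value={bio} onChange={e => setBio(e.target.value)} />;
    }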
>In my experience watching for attribute changes is very fast
Watching for attributes changing is not the whole picture. Why did the attribute change in the first place? Probably because you're doing stuff in terms of DOM attributes, which is slow - not individually, but in aggregate. Death by a thousand papercuts.
> I've worked on some React apps which were so slow due to unnecessary re-renders that my new computer would often freeze during development
I haven't managed to achieve that. If your computer freezes, that almost certainly means you're running out of RAM, not that your CPU is busy. I admit that the React "development build" overhead is considerable.
> I was able to fix it by refactoring the code to be more specific with useState() (to watch individual properties instead of the entire object) but this flexibility that useState provides is a React footgun IMO.
I admit that React has some footguns, but I don't see how the reactive model can be implemented entirely without them. It's a price I'm willing to pay, because it makes most things far easier to reason about. 90% of my components have no state at all. Of those that do, the vast majority have no complex state.
You don't really need all that stuff. Sanitization is straightforward to implement and only required for user-generated strings (since you want to make them HTML-safe). It could be argued that automatically sanitizing everything, including already-safe data types like numbers and system-generated content, adds unnecessary performance overhead for certain projects.
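By straightforward I mean something like this handful of lines (just a sketch, only for interpolating untrusted strings):

    function escapeHtml(value) {
      return String(value)
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;')
        .replace(/'/g, '&#39;');
    }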
As for events, binding is really very easy to do and it's already localised to the component so managing them is trivial.
Loops are also trivial; you can simply use Array.prototype.map function to return a bunch of strings which you can incorporate directly into the main component's template string. In any case, you can always use the native document.createElement and appendChild functions to create elements within the component and add them to its DOM or shadow DOM.
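For the map case, something like this (where `users` is illustrative data and escapeHtml is the helper sketched above):

    const rows = users
      .map(user => `<li class="user">${escapeHtml(user.name)}</li>`)
      .join('');

    this.innerHTML = `
      <ul class="user-list">
        ${rows}
      </ul>
    `;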
I've built some complex apps with plain HTMLElement as a base class for all my components and found it much simpler than React, without any unexpected weirdness and using fewer abstract technical concepts. The code was much more readable and maintainable. I didn't even need a bundler, thanks to the modern async and defer attributes of script tags, among others.
I think the reason why people are using React still is just marketing, hype and inertia. The job market which is gatekept by non-technical recruiters demands React. It's all non-tech people making the big decisions based on buzzwords that they don't understand.
I would not say it's easy, considering your adversaries are very motivated to find XSS holes and the web platform is very complicated.
> It could be argued that automatically sanitizing everything including already safe data types like numbers and system-generated content adds an unnecessary performance overhead for certain projects.
I don't think there's a substantial performance loss from doing a type check on a value to see that it's a number, and then putting it verbatim into the output (within your sanitization code).
I don't know what "system generated content" is, and I'd argue that neither does a framework. Which means the far safer route is to assume it came from a user by default and force the dev to confirm that it's not from the user.
> Loops are also trivial; you can simply use Array.prototype.map function to return a bunch of strings which you can incorporate directly into the main component's template string
Combined with the "it's fine" mentality on data sanitization, it's concerning that we're using the term "string" in relation to building DOM nodes here. I hope we aren't talking about generating HTML as strings, combined with default-trusted application data that in most applications, does in fact come from the user, even if you might consider that user trusted (because it's Dave from Accounting, and not LeetHacker99 from Reddit).
By "system generated content" I meant content which is not derived from potentially unsafe user input. For example, if your front end receives a JSON object from your own back end which was generated by your back end and contains numbers, booleans and enums (from a constrained set of strings) and it is properly validated before insertion into your DB, such data poses no risk to your front end in terms of XSS. That said, if you want to make your system fool-proof and future-proof, you could escape HTML tags in all your string data before incorporating it into a components' template string as a principle; such function is trivial to implement.
The main risk of XSS is when you inject some unescaped user-generated string into a template and then set that whole template as your component's innerHTML... All I want to point out is that not every piece of data is a custom user-generated string. Numbers, booleans don't need to be escaped. Error messages generated by your system don't need to be escaped either. Enum strings (which are validated at insertion in the DB) also don't really need to be escaped but I would probably escape anyway in case of future developer mistake (improper validation).
I agree that the automatic sanitization which React does is probably not a huge performance cost for the typical app (it's probably worth the cost in the vast majority cases) but it depends on how much data your front end is rendering and how often it re-renders (e.g. real time games use case).
> and it is properly validated before insertion into your DB, such data poses no risk to your front end in terms of XSS
This is making a lot of assumptions. Just because the data was acceptable in a database table does not mean it doesn't pose an XSS risk.
Bear in mind, in other branches of this discussion we're talking about using DOM text APIs to insert. Certainly that is a good, reliable way to avoid XSS, but you can consider that to be value sanitization just done for you by the browser. In the absence of that, advocating that "if it comes from the API it is safe" is a dangerous thing to advocate for.
The title "A world where <HTML> tag is not required for your web pages" might be perfectly valid to submit into your blog's CMS system, but that in no way means you can skip processing that in the frontend because "it is safe". Plenty of what you are saying is reasonable, but I think the topic requires a little more nuance in order to speak about the topic responsibly.
Agreed, this is the safe approach if you create elements using document.createElement().
For cases where you want to generate some HTML as strings to embed within your component's template string (e.g. in a React-like manner using Array.prototype.map), you would have to escape the variables provided by the back end in case they contain HTML tags which could be used as an XSS attack.
Although such a sanitization function is trivial to implement... In my previous comment, I mentioned using document.createElement() as a fallback if in doubt. It's safe to create elements with the DOM API and use the textContent property as you suggest. That's why I don't see sanitization as a strong excuse to avoid using plain Web Components.
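A minimal illustration of why the createElement + textContent route is safe (the `container` element is assumed to exist):

    const title = '<img src=x onerror=alert(1)>';  // hypothetical hostile input

    const heading = document.createElement('h1');
    heading.textContent = title;     // rendered literally as text, never parsed as HTML
    container.appendChild(heading);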
I agree that sanitization isn't an excuse not to use web components. Only that brushing off sanitization as solved by web components is dangerous rhetoric.
Yeah, that's where I'm at, too. Authoring web components directly is too low level to be practical. You can easily end up reimplementing a ton of existing framework logic around rerenders, reactivity, event handlers, etc.
And there are libraries that handle all of this for you! But then you have tooling and non-standard syntax that puts you a lot closer to other JS frameworks.
And once you're in that space, the benefits of web components may or may not stack up against all the features and your team's pre-existing knowledge.
GP comment shows a simple example, sure. And obviously it’s overkill just to implement incrementing a displayed value. But trivializing the example doesn’t reduce the “framework” to being pointless.
Two men are having lunch. One complains that he always has cheese on his sandwiches. The other says: why not ask your wife to make something else? He responds: I always make them myself.
Thanks for sharing, we need more libraries like this. Even htmx.org is a 150kb library over what was a few lines of XHR + el.innerHTML = response.result ten years ago.
Where is the next wave of tiny libs that can make the web feel responsive again?
I've always wondered why this notion is so popular (is it just because of what react does)? Wouldn't the native browser be expected to handle a DOM re-render much more efficiently than an entire managed JS framework running on the native browser and emulating the DOM? Maybe in 2013 browsers really really sucked at re-renders, but I have to wonder if the myriads of WebKit, Blink, Gecko developers are so inept at their jobs that a managed framework can somehow figure out what to re-render better than the native browser can.
And yes, I understand that when you program using one of these frameworks you're explicitly pointing out what pieces of state will change and when, but in my professional experience, people just code whatever works and don't pay much attention to that. In the naive case, where developers don't really disambiguate what state changes like they're supposed to, I feel like the browser would probably beat React or any other framework on re-renders every time. Are there any benchmarks or recent blogs/tech articles that dive into this?
I think the reason that the browser is so slow is that every time you mutate something (change an attribute, add or remove an element), the browser rerenders immediately. And this is indeed slow AF. If you batched everything into a DocumentFragment or similar before attaching it to the DOM then it'd be fast. I don't know how you do that ergonomically though.
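For what it's worth, the non-ergonomic version of that batching looks something like this (`list` and `items` are assumed to exist):

    const fragment = document.createDocumentFragment();

    for (const item of items) {
      const li = document.createElement('li');
      li.textContent = item.label;
      fragment.appendChild(li);
    }

    // One attachment to the live DOM instead of items.length separate mutations.
    list.appendChild(fragment);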
It's partially true. Layout and repaint are two separate rendering phases. Repaint happens asynchronously, as you point out. But layout (i.e. calculating heights, widths, etc of each box) is more complicated. If the browser can get away with it, it will batch potential layout changes until directly before the repaint - if you do ten DOM updates in a single tick, you'll get one layout calculation (followed by one repaint).
But if you mix updates and reads, the browser needs to recalculate the layout before the read occurs, otherwise the read may not be correct. For example, if you change the font size of an element and then read the element height, the browser will need to rerun layout calculation between those two points to make sure that the change in font size hasn't updated the element height in the meantime. If these reads and writes are all synchronous, then this forces the layout calculations to happen synchronously as well.
So if you do ten DOM updates interspersed with ten DOM reads in a single tick, you'll now get ten layout calculations (followed by one repaint).
This is called layout thrashing, and it's something that can typically be solved by using a modern framework, or by using a tool like fastdom which helps with batching reads and writes so that all reads always happen before all writes in a given tick.
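With fastdom, that read/write separation looks roughly like this (a sketch; `elem` is assumed to exist):

    import fastdom from 'fastdom';

    fastdom.measure(() => {
      const height = elem.offsetHeight;           // reads are batched together...
      fastdom.mutate(() => {
        elem.style.height = height + 10 + 'px';   // ...and writes run after them
      });
    });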
> I think the reason that the browser is so slow is that every time you mutate something, an attribute or add or remove an element, the browser rerenders immediately.
Is it really immediately? I thought that was a myth.
I thought that, given toplevel function `foo()` which calls `bar()` which calls `baz()` which makes 25 modifications to the DOM, the DOM is only rerendered once when foo returns i.e. when control returns from usercode.
I do know that making changes to the DOM and then immediately entering a while(1) loop doesn't show any change on the page.
The browser will, as much as it can, batch together DOM changes and perform them all at once. So if `baz` looks like this:
    for (let i = 0; i < 10; i++) {
      elem.style.fontSize = i + 20 + 'px';
    }
Then the browser will only recalculate the size of `elem` once, as you point out.
But if we read the state of the DOM, then the browser still needs to do all the layout calculations before it can do that read, so we break that batching effect. This is the infamous layout thrashing problem. So this would be an example of bad code:
    for (let i = 0; i < 10; i++) {
      elem.style.fontSize = i + 20 + 'px';
      console.log(elem.offsetHeight);
    }
Now, every time we read `offsetHeight`, the browser sees that it has a scheduled DOM modification to apply, so it has to apply that first, before it can return a correct value.
This is the reason that libraries like fastdom (https://github.com/wilsonpage/fastdom) exist - they help ensure that, in a given tick, all the reads happen first, followed by all the writes.
That said, I suspect even if you add a write followed by a read to your `while(1)` experiment, it still won't actually render anything, because painting is a separate phase of the rendering process, which always happens asynchronously. But that might not be true, and I'm on mobile and can't test it myself.
> Now, every time we read `offsetHeight`, the browser sees that it has a scheduled DOM modification to apply, so it has to apply that first, before it can return a correct value.
That makes perfect sense, except that I don't understand how using a shadow DOM helps in this specific case (A DOM write followed immediately by a DOM read).
Won't the shadow DOM have to perform the same calculations if you modify it and then immediately use a calculated value for the next modification?
I'm trying to understand how exactly a shadow DOM can perform the calculations after modifications faster than the real DOM can.
The shadow DOM doesn't help at all here, that's mainly about scope and isolation. The (in fairness confusingly named) virtual DOM helps by splitting up writes and reads.
The goal when updating the DOM is to do all the reads in one batch, followed by all the writes in a second batch, so that they never interleave, and so that the browser can be as asynchronous as possible. A virtual DOM is just one way of batching those writes together.
It works in two phases: first, you work through the component tree, and freely read anything you want from the DOM, but rather than make any updates, you instead build a new data structure (the VDOM), which is just an internal representation of what you want the DOM to look like at some point in the future. Then, you reconcile this VDOM structure with the real DOM by looking to see which attributes need to be updated and updating them. By doing this in two phases, you ensure that all the reads happen before all the writes.
There are other ways of doing this. SolidJS, for example, just applies all DOM mutations asynchronously (or at least, partially asynchronously, I think using microtasks), which avoids the need for a virtual DOM. I assume Svelte has some similar setup, but I'm less familiar with that framework. That's not to say that virtual DOM implementations aren't still useful, just that they are one solution with a specific set of tradeoffs - other solutions to layout thrashing exist. (And VDOMs have other benefits beyond just avoiding layout thrashing.)
So to answer your question: the virtual DOM helps because it separates reads and writes from each other. Reads happen on the real DOM, writes happen on the virtual DOM, and it's only at the end of a given tick that the virtual DOM is reconciled with the real DOM, and the real DOM is updated.
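A very condensed sketch of those two phases (not React's actual algorithm, just the shape of it):

    // Phase 1: read freely and describe the desired DOM as plain data (no writes yet).
    function view(state) {
      return {
        tag: 'button',
        attrs: { class: state.active ? 'on' : 'off' },
        text: `Count: ${state.count}`,
      };
    }

    // Phase 2: compare the description against the real element and write only diffs.
    function reconcile(el, vnode) {
      for (const [name, value] of Object.entries(vnode.attrs)) {
        if (el.getAttribute(name) !== value) el.setAttribute(name, value);
      }
      if (el.textContent !== vnode.text) el.textContent = vnode.text;
    }

    // All reads happen while building the vnode; all writes happen inside reconcile.
    // (Assumes a <button> exists on the page.)
    reconcile(document.querySelector('button'), view({ active: true, count: 3 }));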
I'm gonna apologise in advance for being unusually obtuse this morning. I'm not trying to be contentious :-)
> So to answer your question: the virtual DOM helps because it separates reads and writes from each other. Reads happen on the real DOM, writes happen on the virtual DOM, and it's only at the end of a given tick that the virtual DOM is reconciled with the real DOM, and the real DOM is updated.
I still don't understand why this can't be done (or isn't currently done) by the browser engine on the real DOM.
I'm sticking to the example given: write $FOO to DOM causing $BAR, which is calculated from $FOO, to change to $BAZ.
Using a VDOM, if you're performing all the reads first, then the read gives you $BAR (the value prior to the change).
Doing it on the real DOM, the read will return $BAZ. Obviously $BAR is different from $BAZ, due to the writing of $FOO to the DOM.
If this is acceptable, then why can't the browser engine cache all the writes to the DOM and only perform them at the end of the given tick, while performing all the reads synchronously? You'll get the same result as using the VDOM anyway, but without the overhead.
No worries, I hope I'm not under/overexplaining something!
The answer here is the standard one though: if you write $FOO to DOM, then read $BAR, it has to return $BAZ because it always used to return $BAZ, and we can't have breaking changes. All of the APIs are designed around synchronously updating the DOM, because asynchronous execution wasn't really planned in at the beginning.
You could add new APIs that do asynchronous writes and synchronous reads, but I think in practice this isn't all that important for two reasons:
Firstly, it's already possible to separate reads from writes using microtasks and other existing APIs for forcing asynchronous execution. There's even a library (fastdom) that gives you a fairly easy API for separating reads and writes.
Secondly, there are other reasons to use a VDOM or some other DOM abstraction layer, and they usually have different tradeoffs. People will still use these abstractions, even if the layout thrashing issue were solved completely somehow. So practically, it's more useful to provide the low-level generic APIs (like microtasks) and let the different tools and frameworks use them in different ways. I think there's also not a big push for change here: the big frameworks are already handling this issue fine and don't need new APIs, and smaller sites or tools (including the micro-framework that was originally posted) are rarely so complicated that they need these sorts of solutions. So while this is a real footgun that people can run into, it's not possible to remove it without breaking existing websites, and it's fairly easy to avoid if you do run into it and it starts causing problems.
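As a sketch of the microtask approach (`elem` is assumed, and the writes must not need to take effect synchronously):

    const pendingWrites = [];

    function scheduleWrite(fn) {
      if (pendingWrites.length === 0) {
        // Flush once the current task's synchronous reads have all finished.
        queueMicrotask(() => pendingWrites.splice(0).forEach(write => write()));
      }
      pendingWrites.push(fn);
    }

    // Reads stay synchronous; writes are deferred and flushed in one batch.
    const height = elem.offsetHeight;
    scheduleWrite(() => { elem.style.height = height + 10 + 'px'; });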
As I understand it, there's a couple of issues with naively rerendering DOM elements like this.
Firstly, the DOM is stateful, even in relatively simple cases, which means destroying and recreating a DOM node can lose information. The classic example is a text input: if you have a component with a text input, and you want to rerender that component, you need to make sure that the contents of the text input, the cursor position, any validation state, the focus, etc, are all the same as they were before the render. In React and other VDOM implementations, there is some sort of `reconcile` function that compares the virtual DOM to the real one, and makes only the changes necessary. So if there's an input field (that may or may not have text in it) and the CSS class has changed but nothing else, then the `reconcile` function can update that class in-place, rather than recreate it completely.
In frameworks which don't use a virtual DOM, like SolidJS or Svelte, rerendering is typically fine-grained from the start, in the sense that each change to state is mapped directly to a specific DOM mutation that changes only the relevant element. For example in SolidJS, if updating state would change the CSS class, then we can link those changes directly to the class attribute, rather than recreating the whole input field altogether.
The second issue that often comes with doing this sort of rerendering naively is layout thrashing. Rerendering is expensive in the browser not because it's hard to build a tree of DOM elements, but because it's hard to figure out the correct layout of those elements (i.e. given the contents, the padding, the surrounding elements, positioning, etc, how many pixels high will this div be?) As a result, if you make a change to the DOM, the browser typically won't recalculate the layout immediately, and instead batches changes together asynchronously so that the layout gets calculated less often.
However, if I mix reads and writes together (e.g. update an element class and then immediately read the element height), then I force the layout calculation to happen synchronously. Worse, if I'm doing reads and writes multiple times in the same tick of the Javascript engine, then the browser has to make changes, recalculate the layout, return the calculated value, then immediately throw all the information away as I update the DOM again somewhere else. This is called layout thrashing, and is usually what people are talking about when they talk about bad DOM performance.
The advantage of VDOM implementations like React is that they can update everything in one fell swoop - there is no thrashing because the DOM gets updated at most once per tick. All the reads are looking at the same DOM state, so things don't need to be recalculated every time. I'm not 100% sure how Svelte handles this issue, but in SolidJS, DOM updates happen as part of the `createRenderEffect` phase, which happens asynchronously after all DOM reads for a given tick have occurred.
OP's framework is deliberately designed to be super simple, and for basic problems will be completely fine, but it does run into both of the problems I mentioned. Because the whole component is rerendered every time `html` is called, any previous DOM state will immediately be destroyed, meaning that inputs (and other stateful DOM elements) will behave unexpectedly in various situations. And because the rendering happens synchronously with a `innerHTML` assignment, it is fairly easy to run into situations where multiple DOM elements are performing synchronous reads followed by synchronous writes, where it would be better to do all of the reads together, followed by all of the writes.
Thanks for the info! This all makes sense to me intuitively, but I know I've been bitten in the butt several times by implementing clever caching schemes or something that end up slowing the app down more than I thought it would speed it up. It seems like it would be simple enough to set up a test case and benchmark this (you wrote a simple for loop above that should exhibit this behavior). I'm curious how much, if any, react actually ends up saving cycles when a programmer does the same code naively in react and naively in the browser. I think it would make for some interesting benchmarks at least :)
Agreed that investing in standards is always a good bet. But at the same time, we have so many web frameworks in part because what is spec'd in plain JavaScript/HTML/CSS is not quite high-level enough to really be a productive foundation just on its own. Going all the way back to raw `Document.createElement` will come with its own special pain.
With the WebComponents movement though, we are getting ever closer to being able to rely on native browser functionality for a good share of what frameworks set out to do. We're not all the way to bliss without frameworks, but for what it's worth here is my 481-byte library to support template interpolation with event binding in order to make WebComponents pretty workable mostly as-is: https://github.com/dchester/yhtml
I also sometimes enjoy this approach of starting from absolutely nothing.
Instead of taking the path of starting with DOM manipulation and then going to a framework as necessary, I kept trying really hard to make raw web components work, but kept finding that I wanted just a little bit more.
I managed to get the more I wanted -- sensible template interpolation with event binding -- boiled down to a tag function in 481 bytes / 12 lines of (dense) source code, which I feel is small enough that you can copy/paste it around and not feel too bad about it. It's here if anyone cares to look: https://github.com/dchester/yhtml
Wow! What a coincidence! I literally have something extremely similar which I called `$e`. I think one crucial thing you did not mention is that this conforms to the signature of jsxFactory, meaning if you run some kind of transpiler that supports jsx, you can literally do:
parent.append(<p>Hello</p>);
(Mine has slightly more functionality, such as allowing styles to be passed in as an object, auto-concatenating arrays of strings for class names, etc.)
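For reference, a jsxFactory-compatible helper of this sort might look roughly like the following (a sketch, not the actual `$e`):

    // Signature matches what JSX transpiles to: (tag, props, ...children).
    function $e(tag, props, ...children) {
      const el = document.createElement(tag);
      for (const [key, value] of Object.entries(props || {})) {
        if (key.startsWith('on') && typeof value === 'function') {
          el.addEventListener(key.slice(2).toLowerCase(), value);  // onClick -> click
        } else if (key === 'style' && typeof value === 'object') {
          Object.assign(el.style, value);                          // styles as an object
        } else if (key === 'class' && Array.isArray(value)) {
          el.className = value.join(' ');                          // class name arrays
        } else {
          el.setAttribute(key, value);
        }
      }
      el.append(...children.flat());  // strings become text nodes automatically
      return el;
    }

    // With a transpiler configured to use it (e.g. "jsxFactory": "$e" in tsconfig):
    // parent.append(<p class="greeting" onClick={() => alert('hi')}>Hello</p>);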
Joining this thread to say that I, too, have written a very similar function and also use jsxFactory to have JSX support in personal projects. I find that using it along with an extremely simple implementation of a kind of state listener[0] produces something really nice for small projects.
Oh, didn’t expect a reference to me at all, was thinking that the function was very similar to mine :)
That function is now in my must-haves for new Django side projects; I usually overuse it before I finally move to a JSX-based UI library. It's great for simple DOM manipulation… and for me it seems to create a semi-simple migration path in case the code gets complex.
We have used Automerge a bunch, but found that there is a threshold where beyond a given document size, performance gets exponentially worse, until even trivial updates take many seconds' worth of CPU. That is often how it works when the document end state is exclusively the sum of all the edits that have ever happened.
Our answer was to reimplement the Automerge API with different mechanics underneath that allows for a "snapshots + recent changes" paradigm, instead of "the doc is the sum of all changes". That way performance doesn't have to degrade over time as changes accumulate.
This is an implementation problem with Automerge. I wrote a blog post last year about CRDT performance, and I re-ran the benchmarks a couple of months ago. Automerge has improved a lot since then, but a simple benchmark test (automerge-perf[1]) still takes 200MB of RAM using automerge-rs. Yjs and Diamond Types can run the same benchmark in 4MB / 2MB of RAM respectively.
I've had a chat with some of the Automerge people about it. They're working on it, and I've shared the techniques I'm using in Diamond Types (and all the code). It's just an implementation bottleneck.
We wrote about our journey[0] to sanity after Tailwind:
The answer is to do it all in strategic moderation: Use a subset of tailwind just for spacing and layout and revel in simple things being simple. Use modular CSS for UI patterns and revel in the readability of your HTML and visual consistency of your UI. And then use custom scoped CSS where required, and revel in interesting things being only as hard as need be, without ruining everything else along the way.
This sounds cool. We wrote a git-based CMS[0] that is a little different. It has a nice-enough UI for creating and editing markdown documents, which are stored in git. And then it has a JSON API so that your main site can fetch the content and style / format however it likes. Users log in with OAuth or local passwords and their edits end up as git commits that are attributed to them.
We built El[0] with the goal of making a minimal framework for building web apps. As a data point, it has a built-in observable store, reactive templates, scoped subset of scss, no dependencies, and can almost fit in a single network packet.
> I could have chosen to rebuild the entire UI as a template string as a function of state, but instead, it is ultimately more performant to create these DOM helper methods and modify what I want.
I like the effort, but this is basically admitting defeat. Just naively using template strings and re-rendering the whole app on every change of state _is_ too slow, and so the author falls back instead to rendering via a series of bespoke methods that mix logic and template strings and DOM methods all interspersed. It still has all the shortcomings of 00's-era PHP, or piles of jQuery.
It is possible to do one step better than this still with vanilla JS, if you use Web Components / custom elements. That way you can have each type of custom element (todo-app, todo-list, todo-item, etc.) have its own render function using template strings, and you can still sneak in custom optimizations (like direct DOM insertion when adding an item to a long list instead of re-rendering).
But in the end, it's just so wonderful to be able to have reactive state and efficiently render your app from the state, and forget about all the rest.
We developed El.js[1] with all of this in mind, to be as little as possible on top of vanilla js and web components, to get the reactivity and efficient rendering.
[0]: https://github.com/frameable/el