
I use the “HN Dark Mode” add-on set to “auto” so it switches with my OS preferences.

Both on iPhone and Mac.


Then you switch?

My entire OS, most apps and 90% of websites switch automatically with a single keyboard shortcut.


90% of websites have built-in keyboard shortcuts for switching theme? We must visit different websites.

I use an extension for that.


No, you change your OS preference and then most websites that are coded correctly will follow. I do it by sunrise and sunset but have a key binding to override it.

“The Netherlands” isn’t selling anything.

The Dutch national government's mandated login system relies on the technology and hosting of a private company that was in talks with an American counterpart about a possible acquisition.

Bad? Yes

The Netherlands selling their login service? No


I don’t understand this comment. Yes, everything going over the wire is bits, but both endpoints need to know how to interpret this data, right? Types are a great tool to do this. They can even drive the exact wire protocol, and verification of both the data and the protocol version.

So it’s hard to see how types get in the way instead of being the ultimate toolset for shaping distributed communication protocols.


Bits get lost; if you don’t have protocol verification, you get mismatched types.

Types naively used can fall apart pretty easily. Suppose you have some data being sent in three chunks. Suppose you get chunk 1 and chunk 3 but chunk 2 arrives corrupted for whatever reason. What do you do? Do you reject the entire object since it doesn’t conform to the type spec? Maybe you do, maybe you don’t, or maybe you structure the type around it to handle that.

But let’s dissect that last suggestion; suppose I do modify the type to encode that. Suddenly pretty much every field more or less just becomes Maybe/Optional. Once everything is Optional, you don’t really have a “type” anymore, you have a runtime check of the type everywhere. This isn’t radically different than regular dynamic typing.
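
For concreteness, a minimal GHCi-loadable sketch of that “everything becomes Maybe” shape (the chunk and field names are made up, not from any real protocol):

    -- Hypothetical three-chunk payload where any chunk may arrive corrupted or not at all.
    type Header = String
    type Body   = String
    type Footer = String

    data Payload = Payload
      { header :: Maybe Header
      , body   :: Maybe Body
      , footer :: Maybe Footer
      }

    -- Every consumer re-checks presence, much like null checks in a dynamic language.
    summarize :: Payload -> String
    summarize p = case (header p, body p, footer p) of
      (Just h, Just b, Just f) -> unwords [h, b, f]
      _                        -> "incomplete payload"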

There are more elaborate type systems that do encode these things better like session types, and I should clarify that I don’t think that those get in the way. I just think that stuff like the C type system or HM type systems stop being useful, because these type systems don’t have the best way to encode the non-determinism of distributed stuff.

You can of course ameliorate this somewhat with higher level protocols like HTTP, and once you get to that level types do map pretty well and you should use them. I just have mixed feelings for low-level network stuff.


> But let’s dissect that last suggestion; suppose I do modify the type to encode that. Suddenly pretty much every field more or less just becomes Maybe/Optional. Once everything is Optional, you don’t really have a “type” anymore, you have a runtime check of the type everywhere. This isn’t radically different than regular dynamic typing.

Of course it’s different. You have a type that accurately reflects your domain/data model. Doing that helps to ensure you know to implement the necessary runtime checks, correctly. It can also help you avoid implementing a lot of superfluous runtime checks for conditions you don’t expect to handle (and to treat those conditions as invariant violations instead).


No, it really isn’t that different. If I had a dynamic type system I would have to null check everything. If I declare everything as a Maybe, I would have to null check everything.

For things that are invariants, that’s also trivial to check against with `if(!isValid(obj)) throw Error`.


Sure. The difference is that with a strong typing system, the compiler makes sure you write those checks. I know you know this, but that’s the confusion in this thread. For me too, I find static type systems give a lot more assurance in this way. Of course it breaks down if you assume the wrong type for the data coming in, but that’s unavoidable. At least you can contain the problem and ensure good error reports.


The point of a type system isn’t ever that you don’t have to check the things that make a value represent the type you intend to assign it. The point is to encode precisely the things that you need to be true for that assignment to succeed correctly. If everything is in fact modeled as an Option, then yes you have to check each thing for Some before accessing its value.

The type is a way to communicate (to the compiler, to other devs, to future you) that those are the expected invariants.

The check for invariants is trivial as you say. The value of types is in expressing what those invariants are in the first place.
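
One way to make that concrete (a small sketch with made-up names, not tied to any particular wire format): push the check into a single conversion function, so the result type records that the invariant already holds.

    -- Wire-level record: either part may be absent or corrupted.
    data Partial = Partial { pHeader :: Maybe String, pBody :: Maybe String }

    -- Domain-level record: completeness is part of the type.
    data Complete = Complete { cHeader :: String, cBody :: String }

    -- The invariant is checked exactly once; code that takes Complete never
    -- has to re-check for Nothing.
    complete :: Partial -> Maybe Complete
    complete (Partial (Just h) (Just b)) = Just (Complete h b)
    complete _                           = Nothing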


You missed the entire point of the strong static typing.


I don’t think I did. I am one of the very few people who have had paying jobs doing Scala, Haskell, and F#. I have also had paying jobs doing Clojure and Erlang: dynamic languages commonly used for distributed apps.

I like HM type systems a lot. I’ve given talks on type systems, and I was working on trying to extend type systems to deal with these particular problems in grad school. This isn’t meant to be a statement on types entirely. I am arguing that most systems don’t encode for a lot of uncertainty that you find when going over the network.


You're conflating types with the encoding/decoding problem. Maybe your paying jobs didn't provide you with enough room to distinguish between these two problems.

Types can be encoded optimally with a minimal-bits representation (for instance: https://hackage.haskell.org/package/flat), or they can be encoded redundantly with all default/recovery/omission information. What you actually do with that encoding on the wire in a distributed system, with or without versioning, is up to you, and it doesn't depend on the specific type system of your language. But a strong type system offers you unmatched precision both at program boundaries, where encoding happens, and in business logic.

Once you've got that `Maybe a` you can (<$>) in exactly one place at the program's boundary, and then proceed as if your data had always been provided without omission. And then you can combine (<$>) with `Alternative f` to deal with your distributed system's silly payloads in a versioned manner. What's your dynamic language's null-checking equivalent for it?
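
A rough sketch of that boundary pattern using only base, with a toy string "wire format" standing in for a real codec like flat (the version shapes and defaults here are invented for illustration): decode once, fall back across versions via Alternative, and hand the rest of the program a plain Config with no Maybe fields.

    import Control.Applicative ((<|>))
    import Text.Read (readMaybe)

    -- Domain type used by the rest of the program: no Maybe in sight.
    data Config = Config { name :: String, limit :: Int } deriving Show

    -- Hypothetical wire formats: v2 is "name,limit", v1 is just "name".
    parseV2 :: String -> Maybe Config
    parseV2 s = case break (== ',') s of
      (n, ',' : l) -> Config n <$> readMaybe l
      _            -> Nothing

    parseV1 :: String -> Maybe Config
    parseV1 s = if null s then Nothing else Just (Config s 100)  -- old payloads get a default limit

    -- One place at the boundary: try the newest decoder first, fall back via Alternative.
    decode :: String -> Maybe Config
    decode s = parseV2 s <|> parseV1 s

    -- decode "db,42" == Just (Config "db" 42)
    -- decode "db"    == Just (Config "db" 100)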


With all due respect, you can use all of those languages and their type systems without recognizing their value.

For ensuring bits don't get lost, you use protocols like TCP. For ensuring they don't silently flip on you, you use ECC.

Complaining that static types don't guard you against lost packets and bit flips is missing the point.


With all due respect, you really do not understand these protocols if you think “just use TCP and ECC” addresses my complaints.

Again, it’s not that I have an issue with static types “not protecting you”, I am saying that you have to encode for this uncertainty regardless of the language you use. The way you typically encode for that uncertainty is to use an algebraic data type like Maybe or Optional. Checking against a Maybe for every field ends up being the same checks you would be doing with a dynamic language.

I don’t really feel the need to list out my full resume, but I do think it is very likely that I understand type systems better than you do.


Fair enough, though I feel so entirely differently that your position baffles me.

Gleam is still new to me, but my experience writing parsers in Haskell and handling error cases succinctly through functors was such a pleasant departure from my experiences in languages that lack typeclasses, higher-kinded types, and the abstractions they allow.

The program flowed happily through my Eithers until it encountered an error, at which point that was raised with a nice summary.

Part of that was GHC extensions though they could easily be translated into boilerplate, and that only had to be done once per class.

Gleam will likely never live up to that level of programmer joy; what excites me is that it’s trying to bring some of it to BEAM.

It’s more than likely your knowledge of type systems far exceeds mine—I’m frankly not the theory type. My love for them comes from having written code both ways, in C, Python, Lisp, and Haskell. Haskell’s types were such a boon, and it’s not the HM inference at all.


> ends up being the same checks you would be doing with a dynamic language

Sure thing. Unless a dev forgets to do (some of) these checks, or some code downstream changes and the upstream checks become gibberish or insufficient.


I know everyone says that this is a huge issue, and I am sure you can point to an example, but I haven’t found that types prevented a lot of issues like this any better than something like Erlang’s assertion-based system.


When you say "any better than", are you referring to the runtime vs compile-time difference?


While I don't agree with the OP about type systems, I understand what they mean about erlang. When an erlang node joins a cluster, it can't make any assumptions about the other nodes, because there is no guarantee that the other nodes are running the same code. That's perfectly fine in erlang, and the language is written in a way that makes that situation possible to deal with (using pattern matching).


Reminds me of this classic doing the same: http://blog.sigfpe.com/2006/11/from-l-theorem-to-spreadsheet...


I’ve read the post carefully and I still don’t get how they proved Santa Claus without proving the proposition.


Even if you can reason through a code base, a bisect can still be much quicker.

Instead of understanding the code you only need to understand the bug. Much easier!


Don’t know about difficult, but at least less elegant. Lazy evaluation, type inference, abstractions like Functor/Applicative/Alternative/Monad make them so incredibly natural to work with in a language like Haskell. Sure, they exist in other languages (made a few myself) but it’s not the same.


Yes, I'm writing a parser in Unison (in the Haskell family) for ASN.1 at the moment. It's so clean to write parsers with parser combinators.

For example Asn1Type can be of form Builtin, Referenced, or Constrained. So a sum type.

    parseType = Builtin <$> parseBuiltin <|> (Referenced <$> parseReferenced) <|> (Constrained <$> parseConstrained)
Assuming you have the parsers for Builtin, Referenced, and Constrained, you're golden. (Haskell parser combinators look very similar, possibly even exactly the same, minus some parentheses for operator-precedence reasons.)

Compare Parsy for Python, particularly the verbosity (this parses SELECT statements in SQL):

    select = seq(
        _select=SELECT + space,
        columns=column_expr.sep_by(padding + string(",") + padding, min=1),
        _from=space + FROM + space,
        table=table,
        where=(space >> WHERE >> space >> comparison).optional(),
        _end=padding + string(";"),
    ).combine_dict(Select)

The same thing in an FP-style language would be something like

    eq "SELECT" *>
      (Select
        <$> sepBy columnExpr (eq ",") <* eq "FROM"
        <*> parseTable
        <*> optional (eq "WHERE" *> comparison))
      <* eq ";"
which would feed into something like

    type Select = { cols: [ColumnExpr], table: Table, where: Optional Where}


We use WASM quite a bit for embedding a ton of Rust code with very company-specific domain logic into our web frontend. Pretty cool, because now your backend and frontend can share all kinds of logic without endless network calls.

But it’s safe to say that the interaction layer between the two is extremely painful. We have nicely modeled type-safe code in both the Rust and TypeScript worlds and an extremely janky layer in between. You need a lot of inherently slow and unsafe glue code to make anything work. Part of it is WASM related, part of it is wasm-bindgen. What were they thinking?

I’ve read that WASM isn’t designed with this purpose in mind to go back and forth over the boundary often. Rather, that it fits more the purpose of doing longer-running compute in the background and bringing over some chunk of data at the end. Why create a generic bytecode execution platform and limit the use case so much? Not everyone is building an in-browser crypto miner.

The whole WASM story is confusing to me.


My reading of it is that the people furthering WASM aren't really associated with just browsers anymore and they are building a whole new VM ecosystem that the browser people aren't interested in. This is just my take since I am not internal to those organizations. But you have the whole web assembly component model and browsers just do not seem interested in picking that up at all.

So on the one side you have organizations that definitely don't want to easily give network/filesystem/etc. access to code and on the other side you have people wanting it to be easier to get this access. The browser is the main driving force for WASM, as I see it, because outside of the browser the need for sandboxing is limited to plugins (where LUA often gets used) since otherwise you can run a binary or a docker container. So WASM doesn't really have much impetus to improve beyond compute.


> So on the one side you have organizations that definitely don't want to easily give network/filesystem/etc. access to code and on the other side you have people wanting it to be easier to get this access

I don't think this is entirely fair or accurate. This isn't how Wasm runtimes work. Making it possible for the sandbox to explicitly request specific resource access is not quite the same thing as what you're implying here.

> The browser is the main driving force for WASM, as I see it

This hasn't been the case for a while. In your first paragraph you yourself say that 'the people furthering WASM are [...] building a whole new VM ecosystem that the browser people aren't interested in' - if that's the case, how can the browser be the main driving force for Wasm? It's true, though, that there's very little revenue in browser-based Wasm. There is revenue in enterprise compute.

> because outside of the browser the need for sandboxing is limited to plugins (where LUA often gets used) since otherwise you can run a binary or a docker container

Not exactly true when you consider that docker containers are orders of magnitude bigger, slower to mirror and start up, require architecture specific binaries, are not great at actually 'containing' fallout from insecure code, supply chain vulns, etc.. The potential benefits to enterprise orgs that ship thousands of multi-gig docker containers a week with microservices architectures that just run simple business logic, are very substantial. They just rarely make it to the hn frontpage, because they really are boring.

However, the Wasm push in enterprise compute is real, and the value is real. But you're right that the ecosystem and its sponsorship is still struggling - in some part due to lack of support for the component model by the browser people. The component model support introduced in go 1.25 has been huge though, at least for the (imho bigger) enterprise compute use case, and the upcoming update to the component model (wasi p3) should make a ton of this stuff way more usable. So it's a really interesting time for Wasm.


> The potential benefits to enterprise orgs that ship thousands of multi-gig docker containers a week with microservices architectures that just run simple business logic, are very substantial.

What are you talking about? Alpine container image is <5MB. Debian container image (if you really need glibc) is 30MB. wasmtime is 50MB.

If a service has a multi-gig container, that is for other stuff than the Docker overhead itself, so would also be a multi-gig app for WASM too.

Also, Docker images get overlayed. So if I have many Go or Rust apps running on Alpine or Debian as simple static binaries, the 5MB/30MB base system only exists once. (Same as a wasmtime binary running multiple programs).


> Alpine container image is <5MB. Debian container image (if you really need glibc) is 30MB. wasmtime is 50MB.

That's not the deployment model of wasm. You don't ship the runtime and the code in a container.

If you look at crun, it can detect if your container is wasm and run it automatically, without your container bundling the runtime. I don't know what crun does, but in wasmcloud for example, you're running multiple different wasm applications atop the same wasm runtime. https://github.com/containers/crun/blob/main/docs/wasm-wasi-...


My point is that that's exactly the deployment model of Docker. So if I have 20 apps that are a Go binary + config on top of Alpine, that Alpine layer will only exist once and be shared by all the containers.

If I have 20 apps that depend on a 300MB bundle of C++ libraries + ~10MB for each app, as long as the versions are the same, and I am halfway competent at writing containers, the storage usage won't be 20 * 310MB, but 300MB + 20 * 10MB.

Of course in practice each of the 20 different C++ apps will depend on a lot of random mutually exclusive stuff leading to huge sizes. But there's rarely any reason for 20 Go (or Rust) apps to base their containers on anything other than lean Alpine or Debian containers.

Even for deploying wasm containers. Maybe there are certain technical reasons why they needed an alternate "container" runtime (wasi) to run wasm workloads with CRI orchestration, but size is not a legitimate reason. If you made a standard container image with the wasm runtime and all wasm applications simply base off that image and add the code, the wasm runtime will be shared between them, and only the code will be unique.

"Ah, but each container will run it's own separate runtime process." Sure, but the most valuable resource that probably wastes is a PID (and however many TIDs). Processes exec'ing the same program will share a .text and .rodata sections and the .data and .bss segments are COW'ed.

Assuming the memory usage of the wasm runtime (.data and .bss modifications + stack and heap usage) is vaguely k + sum(p_i) where p_i is some value associated with process i, then running a single runtime instead of running n runtimes saves (n - 1) * k memory. The question then becomes how much is k. If k is small (a couple megs), then there really isn't any significant advantage to it, unless you're running an order of magnitude more wasm processes than you would traditional containers. Or, in other words if p_i is typically small. Or, in other other words, if p_i/k is small.

If p_i/k is large (if your programs have a significant size), wasi provides no significant size advantage, on disk or in memory, over just running the wasm runtime in a traditional container. Maybe there are other advantages, but size isn't one of them.


> "Ah, but each container will run it's own separate runtime process." Sure, but the most valuable resource that probably wastes is a PID (and however many TIDs). Processes exec'ing the same program will share a .text and .rodata sections and the .data and .bss segments are COW'ed.

In terms of memory footprint, I can allow that the OS may be smart enough to share a lot of the exec'ed process if it's run multiple times. With Go and Rust and other statically compiled programs, that's going to scale to the number of instances of a service. With Node you might scale more, but then you need to start dynamically loading app code and that won't be shared.

With wasm hosts, you can just ship your app code, and ask the wasm host to provide libraries to you. So you can have vastly more memory sharing. Wasm allows a lot of what you term p_i to be shifted into k through this sharing.

But there are so many other reasons to have a shared runtime rather than many processes.

Context switching can be a huge cost, one that a wasm host can potentially avoid as it switches across different app workloads. Folks see similar wins from v8 isolates, which for example CloudFlare has used on their worker platform to allow them to scale up to a massive number of ultra-light workers.

> Even for deploying wasm containers. Maybe there are certain technical reasons why they needed an alternate "container" runtime (wasi) to run wasm workloads with CRI orchestration, but size is not a legitimate reason. If you made a standard container image with the wasm runtime and all wasm applications simply base off that image and add the code, the wasm runtime will be shared between them, and only the code will be unique

The above talks to some of the technical reasons why an alternate runtime enables such leaps and bounds versus the container world. Memory size is absolutely the core reason; that you can fit lots of tiny micro-processes on a wasm host, and have them all sharing the same set of libraries. Disk size is a win for the same reason, that containers don't need to bundle their dependencies, just ask for them. There's a 2022 post talking about containers, isolates, and wasm, talking more to all this arch: https://notes.crmarsh.com/isolates-microvms-and-webassembly


If you want small function-style services then yea, that's valid, because p_i is really small.

The question is really if you want hundreds of big and medium-sized services on a server, or tens of thousands of tiny services. This is a design question. And while my personal preference would be for the former, probably because that's what I'm used to, I'll admit there could be certain advantages to the latter.

Good job, you've convinced me this can be valid.


These numbers are true, but you'd be amazed at the number of organisations that have containers that are just based on ubuntu:latest, and don't strip the package cache etc.


ubuntu:latest is also 30MB, like Debian.

Obviously an unoptimized C++/Python stack that depends on a billion .so's (specific versions only) and pip packages is going to waste space. The advantage of containers for these apps is that it can "contain" the problem, without having to rewrite them.

The "modern" languages: Go and Rust produce apps that depend either only on glibc (Rust) or on nothing at all (Rust w/ musl and Go). You can plop these binaries on any Linux system and they will "just work" (provided the kernel isn't ancient). Sure, the binaries can be fat, but it's a few dozen megabytes at the worst. This is not an issue as long as you architect around it (prefer busybox-style everything-in-a-binary to coreutils-style many-binaries).

Moreover, a VM isn't really necessary, as these programming languages can be easily cross-compiled (especially Go, for which I have the most experience). Compared to C/C++, where cross-compiling is a massive pain (which led to Java and its VM dominating because it made cross-compilation unnecessary), I can run `GOOS=windows GOARCH=arm64 go build` and build a native windows arm64 binary from x86-64 Linux with nothing but the standard Go compiler.

The advantage of containers for Rust and Go lies in orchestration and separation of filesystem, user, ipc etc. namespaces. Especially orchestration in a distributed (cluster) environment. These containers need nothing more than the Alpine environment, configs, static data and the binary to run.

I fail to see what problem WASM is trying to solve in this space.


You know what would be cool? A built in way for your browser to automatically download and run local-first software as a docker container, in the background without user confirmation.

The problem with that idea is docker isn’t as secure as wasm is. That’s one big difference: wasm is designed for security in ways that docker is not.

The other big difference is that wasm is in-process, which theoretically should reduce overhead of switching between multiple separate running softwares.


That wouldn't be cross-platform. Browsers couldn't even ship SQL because it would inevitably tie them to sqlite, specifically, forever. They definitely can't ship something that requires a whole Linux kernel.


> Browsers couldn't even ship SQL because it would inevitably tie them to sqlite, specifically, forever.

Nonsense.

Chrome stores its sqlite db in C:\Users\%USERNAME%\AppData\Local\Google\Chrome\User Data\Default\databases

And Firefox:

> Why Firefox Uses SQLite

> Cross-Platform Compatibility: Works seamlessly across all platforms Firefox supports.

https://www.w3resource.com/sqlite/snippets/firefox-sqlite-gu...


These are not used to expose SQL to web pages though, they're used only internally by the browser.


You said browsers didn't ship sqlite due to lack of cross-platform support.

I disproved that.

Don't change the goalposts.


But you are changing the goalposts. Browsers couldn't ship SQL (the JavaScript feature called WebSQL) because it would tie the web ecosystem to the future of sqlite, which is a specific cathedral-type project.


Surely moving those containers to alpine would be 1000x easier than rewriting everything in wasm though.


Yeah, what you're talking about there is not what I was talking about.


Meanwhile the people using already established VM ecosystems don't see value in dropping several decades of IDEs, libraries and tools for yet another VM redoing more or less the same thing, e.g. application servers in Kubernetes with WASM containers.


Already-established single-vendor VM ecosystems.


On the contrary,

https://en.wikipedia.org/wiki/List_of_Java_virtual_machines

And looking at other bytecode based systems, enough runtimes with multiple vendors.

https://en.wikipedia.org/wiki/Bytecode


I don’t know, I feel myself shifting the goalposts here, but at the same time, I don’t feel I’m unjustified in doing that. You’re probably not going to complain that the list of JVMs you linked is incomplete, even though there are probably small-scale JVMs in SIM cards and other such minuscule environments that are not listed there, and those could easily be something entirely unique that was written by a couple of guys in a room in 1997 and last had a feature added to it in 2000.

So I can’t help observing the non-obsolete JVMs not targeting an embedded usecase (and thus capable of supporting the kind of tools you’re referring to) are all either research projects or OpenJDK-based (or both). Harmony is dead. GCJ is dead. (Does Dalvik count?..) Oracle has already perpetrated half of a rugpull on OpenJDK and mostly murdered the JCP, and that ratchet only ever goes in one direction. And the tooling of the kind you describe is mostly sold by a handful of companies for a handful of notable languages, none of which were born outside the JVM context: Kotlin is the last entrant there, and it is very good, but the tooling is also very distinctly a single-vendor thing to the point that Google didn’t roll their own. (Both Clojure and Scala are lovely, but I don’t think they’re on the same side of notable as Java or Kotlin. Groovy still exists.) Jython was almost the last holdout for targeting the JVM, and it’s also effectively dead. JRuby is... surprisingly alive? But I don’t believe it has tooling worth a damn.

I guess what I want to say is, for a very long time—though not for the entirety of its history—it’s been an ecosystem centered on Java and not really on the JVM. Microsoft’s CLR actually made a fair attempt at, let’s say, multilingualism for significantly longer, but after the DLR flopped that was mostly the end of that. And, of course, instead of a Java-centric ecosystem it was from the start a Microsoft-centric ecosystem, and after the stunts they’ve pulled (most recently) with their VSCode extensions and language servers I don’t trust any development tool Microsoft releases to continue to exist in any given form for any given duration.

So I don’t think there’s really anything mature that would be VM-centric in the way that Wasm is, that would furthermore not go up in smoke if a single vendor decided to pull out. That’s not to say I think Wasm is the pinnacle of computing–I think it’s pretty wasteful, actually, and annoyingly restrictive in what languages it can support. I just don’t think it really retreads the paths of any of the (two) existing prominent VM-based platforms.


So if it is about shifting the goalposts: bytecode-based environments, multilingual ones as well, have existed since UNCOL became an idea in 1958.

https://en.wikipedia.org/wiki/UNCOL

Plenty of history lessons available on digital archives, for those that care to learn about them.

As for "single vendor", it isn't as if as usual only a couple of big shots aren't driving the standard.


WASM as it is, is good enough for non-trivial graphics and geometry workloads - visibility culling (given octree/frustum), data de-serialization (pointclouds, meshes), and actual BREP modeling. All of these a) are non-trivial to implement b) would be a pain to rewrite and maintain c) run pretty swell in the wasm.

I agree WASM has its drawbacks, but the execution model is mostly fine for these types of tasks, where you offload the task to a worker and are fine waiting a millisecond or two for the response.

The main benefit for complex tasks like the above is that when a product needs to support an isomorphic web and native experience (quite many use cases actually in CAD, graphics & GIS) based on complex computation you maintain, the implementation and maintenance load drops by half. I.e. these _could_ be e.g. TypeScript, but then maintaining feature parity becomes _much_ more burdensome.


> I’ve read that WASM isn’t designed with this purpose in mind to go back and forth over the boundary often.

It's fine and fast enough as long as you don't need to pass complex data types back and forth. For instance WebGL and WebGPU WASM applications may call into JS thousands of times per frame. The actual WASM-to-JS call overhead itself is negligible (in any case, much less than the time spent inside the native WebGL or WebGPU implementation), but you really need to restrict yourself to directly passing integers and floats for 'high frequency calls'.

Those problems are quite similar to any FFI scenario though (e.g. calling from any high level language into restricted C APIs).


For performant WASM/JS interchange you might be interested in Sledgehammer Bindgen.

https://github.com/ealmloff/sledgehammer_bindgen


> Why create a generic bytecode execution platform and limit the use case so much?

How would you make such a thing without limiting it in some such way?


By giving it DOM support


Which meant having garbage collection working across the WASM/JS barrier. This is now possible, but was not exactly trivial to design. It's a good thing that this was not rushed out.


Check the context of the quote. DOM support is unrelated, it was about the Rust/TypeScript interface.


What I got from the GP is that that interface is necessary because the wasm environment itself is limited. Quote:

> You need a lot of inherently slow and unsafe glue code to make anything work.

Idea being that with dom support you’d need less unsafe glue code.

Of course I was being glib but it is the point of TFA after all.


The entire DOM API is very coupled to JS, it's all designed with JS in mind, any new and future proposed changes are thought about solely through the lens of JS.

If they introduced a WASM API it would perpetually be a few months/years behind the JS one, any new features would have to be implemented in both etc.

I can see why it's not happened

(edit) And yes, I think the intention of WASM was either heavy processing, or UI elements more along the lines of what used to be done with Java applets etc. potentially using canvas and bypassing the DOM entirely, not as an alternative to JS for doing `document.createElement`


>> The entire DOM API is very coupled to JS,

Is it though? I thought it was all specified in WebIDL and all the browser vendors generate C++ headers from it too.


The confusion is perhaps due to your usage focus and the security constraints browser compiler makers face to make something secure.

First off, remember that initially all we had was JS; then Asm.JS was forced down Apple's throat by being "just" a JS-compatible performance hack (remember that Google had tried to introduce NaCl beforehand but never got traction). You can still see the Asm.JS lineage in how Wasm branching opcodes work (you can always easily decompose them into while loops together with break and continue instructions).

The target market for NaCl, Asm.JS and Wasm seems to have been focused on enabling porting of C/C++ games, even if other usages were always of interest, so while interop times can be painful it's usually not a major factor.

Secondly, As a compiler maker (and to look at performance profiles), I usually place languages into 3 categories.

Category 1: Plain-memory-accessors, objects are usually a pointer number + offsets for members, more or less manually managed memory. Cache friendliness is your own worry, CPU instructions are always simple.

C, C++, Rust, Zig, Wasm/Asm.JS, etc goes here.

Category 2: GC'd offset-languages, while we still have pointers (now called references) they're usually restricted from being directly mutated, instead going through specialized access instructions, however as with category 1 the actual value can often be accessed with the pointer+offset and object layouts are _fixed_, so less freedom vs JS but higher perf.

Also there can often be GC-specific instructions like read/write-barriers associated with object accesses. Performance for actual instructions is still usually good but GC's can affect access patterns to increase costs and some GC collection unpredictability.

Java, C#, Lisps, high perf functional languages,etc usually belong here (with exceptions).

Category 3: GC'd free-prop languages, objects are no longer of fixed size (you can add properties after creation), runtimes like V8 try their best to optimize this away to approach Category 2 languages, but abuse things enough and you'll run off a performance cliff. Every runtime optimization requires _very careful_ design of fallbacks that can affect practically any other part of the runtime (these manifest as type-confusion vulnerabilities if you look at bug reports) as well as how native bindings are handled.

JS, Python, Lua, Ruby, etc goes here.

Naturally some languages/runtimes can straddle these lines (.NET/CIL has always been able to run C as well as later JS, Ruby and Python in addition to C# and today C# itself is gaining many category 1 features), I'm mostly putting the languages into the categories where the majority of user created code runs.

To get back to the "troubles" of Wasm<->JS, as you noticed they are of category 1 and 3. Since Wasm is "wrapped" by JS, you can usually reach into Wasm memory from JS since it's "just a buffer"; the end-user security implications are fairly low since the JS side has well defined bounds checking (outside of performance costs).

The other direction is a pure clusterf from a compiler writers point of view, remember that most of these optimizations of Cat 3 languages have security implications? Allowing access would require every precondition check to be replicated on the Wasm side as well as in the main JS runtime (or build a unified runtime but optimization strategies are often different).

The new Wasm-GC (finally usable with Safari since late last year) allows GC'd Category 2 languages to be built directly to Wasm (and not ship their own GC via Cat 1 emulation like C#/Blazor) or be compiled to JS, and even here they punted any access to category 3 (JS) objects, basically marking them as opaque objects that can be referred and passed back to JS (an improvement over previous WASM since there is no extra GC syncing as one GC handles it all, but still no direct access standardized iirc).

So, security has so far taken a center stage over usability. They fix things as people complain but it's not a fast process.


> You need a lot of inherently slow and unsafe glue code to make anything work.

That describes much of modern computing.


>The whole WASM story is confusing to me.

Think of it as a backend and not as a library and it clicks.


Yes, but that’s exactly what I’m trying to avoid.


WASM is just hotfixing javascript to use any language people want.

It's all about JavaScript being popular and being the standard language. JS is not a great language, but it's standard across every computer, and that dwarfs anything else that can be said about it.

Adjusting browsers so they can use WASM was easy to do, but telling browser vendors to make the DOM work was obviously more difficult, because they might handle the DOM in various ways.

Not to mention js engines are very complicated.


WASM is not a web scripting language

Trying to shoehorn Rust as a web scripting language was your second mistake

Your first mistake was to mix Rust, TypeScript and JavaScript only just to add logic to your HTML buttons

I swear, things get worse every day on this planet


WASM enables things like running a 20 year old CAD engine written in C++ in the browser. It isn’t a scripting language, it’s a way to get high-performing native code into web apps with a sensible bridge to the JS engine. It gets us closer to the web as the universal platform.


The biggest problem solved by WASM is runtime portability. For security reasons many users and organizations will not download or install untrusted binaries. WASM provides a safer alternative in an often temporary way. The universal nature is an unintended byproduct of a naive sandbox, though still wonderful.


Why would WASM be any less secure than JavaScript?


It's not, they're saying it's a safer alternative to traditional binaries


Exactly, and the CAD engine doesn't need to know about nor access the DOM

> It gets us closer to the web as the universal platform.

As a target

I don't want a pseudo 'universal' platform owned by Big Tech; or by governments as a substitute

Google/Chrome controlled platform, no thanks

https://i.imgur.com/WfXEKSf.jpeg


So what should they have used to share logic between backend and frontend in a type safe way?


If the logic is compute heavy, Rust with Wasm can be a good approach. TypeScript on both ends is also a pragmatic choice. You can still have a boundary in the backend for lower level layers in Rust.

If the logic is merely about validation, then an IDL with codegen for TS and some backend language is probably better. There are also some more advanced languages targeting transpilation to both JS and a backend language such as Haxe, but they all have some trade-offs.


Actually, WASM will enable many languages that are better for web scripting than JavaScript.


WASM certainly had the potential for this, but I am afraid without direct DOM access it is never going to happen.


It is happening. Leptos is an example: https://www.leptos.dev/

Dioxus is another: https://dioxuslabs.com/

C# with Avalonia for a different use case: https://avaloniaui.net/

Avalonia solitaire demo: https://solitaire.xaml.live/

Avalonia Visual Basic 6 clone: https://bandysc.github.io/AvaloniaVisualBasic6/

Blazor can run as WebAssembly on the client side if you choose that runtime mode: https://dotnet.microsoft.com/en-us/apps/aspnet/web-apps/blaz...

Beyond the browser, Wasmer does WebAssembly on the serverside: https://wasmer.io/

Fermyon too: https://www.fermyon.com/

Extism is a framework for an application to support WebAssembly plugins: https://extism.org/


Btw there is also Dominator: https://github.com/Pauan/rust-dominator


how'd you compare dioxus and leptos?


If you're familiar with JS frameworks, you can think of it like this:

Dioxus : React :: Leptos : SolidJS

The key for me is that Leptos leans into a JSX-like templating syntax as opposed to Dioxus's H-like function calls. So, Leptos is a bit more readable in my opinion, but that probably stems from my web dev background.

The Dioxus README has a whole section comparing them -- https://github.com/DioxusLabs/dioxus#dioxus-vs-leptos


Not parent but GP: Love Leptos, I think they are on the right track. Dioxus is good too, I think it has wider scope and they also obtained funding from external sources while Leptos is completely volunteer based.


> Like it or not, we are already serving the machines.

The machines don’t give a shit, it’s the lawyers and bureaucrats you’re serving :)

Better or worse?


In postmodern societies, reality itself is structured by simulation—"codes, models, and signs are the organizing forms of a new social order where simulation rules".

The bureaucratic and legal apparatus you invoke are themselves caught up in this regime. Their procedures, paperwork, and legitimacy rely on referents—the "models" and "simulacra" of governance, law, and knowledge—that no longer point back to any fundamental, stable reality. What you serve, in effect, is the system of signification itself: simulation as reality, or—per Baudrillard—hyperreality, where "all distinctions between the real and the fictional, between a copy and the original, disappear".

"The spectacle is not a collection of images but a social relation among people, mediated by images." (Debord) Our social relations, governance, and even dissent become performances staged for the world's endless mediated feedback loop.

In this age, according to Heidegger, "everything becomes a 'picture', a 'set-up' for calculation, representation, and control." The machine is not just a device or a bureaucratic protocol—it is the mode of disclosure through which the world appears, and your sense of selfhood and agency are increasingly products (and objects) within this technological enframing.

Yada, yada, yada; the Matrix is real.

ie, you don't know the half of it, compadre.


Exactly right. Better to have a domain layer with data types representing the domain objects 1:1, and add one or more API layers on top for interacting with those in some modality: creation, deletion, verification, auth, etc.

The security failure is not the parsing library, but failing to model your application architecture properly.
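
A rough Haskell sketch of that layering (names are illustrative only): a domain type that owns the invariants, plus a thin wire/API type per modality with the translation at the boundary.

    -- Domain layer: models the object itself, independent of any API.
    newtype UserId = UserId Int
    data User = User { userId :: UserId, email :: String }

    -- API layer: the shape accepted over the wire for one modality (creation).
    data CreateUserRequest = CreateUserRequest { reqEmail :: Maybe String }

    -- Validation/translation lives at this boundary, not inside the parser.
    toDomain :: Int -> CreateUserRequest -> Either String User
    toDomain freshId req = case reqEmail req of
      Just e | '@' `elem` e -> Right (User (UserId freshId) e)
      _                     -> Left "invalid or missing email"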

