
I investigated wasm as a scripting engine (not a compile target, which is a pretty great use case for wasm) and the thing that killed it for me was how difficult it was to do zero-copy host-wasm memory sharing. Wasm is designed to be sandboxed and breaking those guarantees is hard. The problems I hit were:

1. Many languages (AssemblyScript, Grain) expect to own the entire linear memory, so even with WASM shared memory it isn't safe to actually use it unless you jump through hoops (e.g. use AssemblyScript GC functions to allocate and pin memory for host use -- see the sketch below).

2. WASM multi-memory promises to solve this by letting you attach additional separate linear memories, but neither AssemblyScript nor Grain support it. Additionally, the wasmer runtime doesn't yet support it.
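
To make the "allocate and pin" hoop in point 1 concrete, here is a rough host-side sketch. It assumes the AssemblyScript module is built with --exportRuntime (which exports __new/__pin/__unpin) and exposes a made-up helper called allocPinned that allocates an ArrayBuffer and pins it so the GC won't free it while the host holds a view; sim.wasm and the imports object are placeholders too:

    // Assumed AssemblyScript side (compiled with --exportRuntime):
    //
    //   export function allocPinned(size: i32): usize {
    //     return __pin(changetype<usize>(new ArrayBuffer(size)));
    //   }

    // JS host side:
    const imports = { /* env.abort etc., whatever the module requires */ }
    const { instance } = await WebAssembly.instantiateStreaming(fetch('sim.wasm'), imports)
    const { memory, allocPinned, __unpin } = instance.exports

    const ptr = allocPinned(1024 * 4)                        // pointer into the wasm linear memory
    let floats = new Float32Array(memory.buffer, ptr, 1024)  // zero-copy view for the host

    // Caveats: memory.buffer is replaced whenever the wasm memory grows, so the view
    // must be re-created after any growth, and the block has to stay pinned for as
    // long as the host keeps a view into it (call __unpin(ptr) when done).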

I didn’t investigate whether Rust or Emscripten had either of these issues (I assume not), but they felt like a poor choice for “scripting”. I also didn’t investigate using V8 or other JavaScript engines to run wasm.

I guess the performance depends on how much you have to serialise. The ideal is to work in SharedArrayBuffers directly on linear structures of primitives and avoid serialisation altogether. I did this for a (JavaScript) particle system where the simulation runs in a web worker; that way you can achieve zero copy and it's fast. But yeah, once you have to cross the boundary and convert or copy data, performance is hurt quite a bit…
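
For context, the zero-copy worker setup itself is fairly small (a minimal sketch; the 'sim.js' file name and buffer size are made up, and SharedArrayBuffer requires the page to be cross-origin isolated with COOP/COEP headers):

    // main.js -- allocate shared memory, hand it to the worker, render straight from it
    const sab = new SharedArrayBuffer(1024 * 2 * 4)   // 1024 particles * (x, y) * 4 bytes each
    const positions = new Float32Array(sab)
    const worker = new Worker('sim.js')
    worker.postMessage(sab)                           // shared with the worker, not copied

    function render() {
        // read positions[i * 2] / positions[i * 2 + 1] directly; no per-frame messages
        requestAnimationFrame(render)
    }
    requestAnimationFrame(render)

    // sim.js -- the worker writes into the same memory
    onmessage = (e) => {
        const positions = new Float32Array(e.data)
        setInterval(() => {
            // update positions[] in place; nothing is serialised or posted per frame
        }, 16)
    }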



Pardon the delay. Love this analysis. I think you were the person whose particle system post inspired me to give wasm a deeper look (as I was saying, mine is just turn-based games, so not exactly a demanding workload).

My thought was: can I take a language that I'm comfortable with and that is reasonably fast (both to develop in and from a perf perspective) and use that as my "logic layer"? C and Rust would also have been great, but for me C's memory management (I'd be lying if I said I'm still comfortable with it after being away for over a decade) and batteries-light standard library (again, not sure what's changed in the last 10+ years) kept me at Go (Rust's learning curve I did not even want to think about).

Back to the performance penalties - I structured it so that payloads are small and all state is kept on the wasm side, with just batched messages to update view state. If I ever need to do high-FPS things I'll have to dig more into SharedArrayBuffers (but that felt like a pain to get working). But then I'd have to reimplement a lot of the libs like phaserjs etc.?
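
A minimal sketch of that batched-update idea on the JS side (applyBatch, the message shape, and the sprites map are illustrative, not the actual code; the Go side could call it once per turn via syscall/js with js.Global().Call("applyBatch", payload)):

    // one batched payload per turn instead of one host call per changed entity
    const sprites = new Map()   // id -> Phaser sprite, populated at setup

    globalThis.applyBatch = (json) => {
        const batch = JSON.parse(json)                // one parse per turn
        for (const u of batch.updates) {
            sprites.get(u.id).setPosition(u.x, u.y)   // apply the view-state change
        }
    }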


I don't think I posted about my particle system, so I don't think I'm the same person.

> But then I'd have to reimplement a lot of the libs like phaserjs etc.?

The way I see it is that you either:

1. Do everything in WASM, using your language's native libraries (Emscripten supports SDL and WebGL, for example -- I hear newer wasm versions also make more browser APIs directly available from WASM)

2. Use WASM to offload certain workloads or simulation, send the data to JavaScript, and do all rendering in JavaScript (using three.js, PIXI.js, or WebGL directly).

If using 2, you either send updates (I made a little toy test engine where I wrote messages to a SharedArrayBuffer that the JS side could read), or you operate directly on primitives in a SharedArrayBuffer (fastest since there's no serialisation needed, but harder to do).

Note: for my particle system, I used PIXI.js to render and the web worker to simulate. I had a read index and a write index; the simulation would read the particle attributes from the read index and write them to the write index. The read index would be incremented for each particle and the write index only for live particles, meaning that as particles died it auto-compacted:

    const SIZE_OF_PARTICLE_IN_BYTES = 8           // just x and y as float32s in this simplified version
    let read_index = 0
    let write_index = 0
    let numParticles = liveParticles              // live count from the previous frame (assumed > 0)
    const view = new DataView(sharedBuffer)       // sharedBuffer is the SharedArrayBuffer the renderer also sees
    do {
        // read this particle's attributes from the read cursor
        const x = view.getFloat32(read_index + 0, true)   // true = little-endian, matching typed-array views
        const y = view.getFloat32(read_index + 4, true)
        const pos = update(x, y)                  // whatever your update logic actually is
        const stillAlive = pos.alive              // placeholder: however you track particle lifetime
        if (stillAlive) {
            // write back at the write cursor
            view.setFloat32(write_index + 0, pos.x, true)
            view.setFloat32(write_index + 4, pos.y, true)
            // only advance for live particles
            write_index += SIZE_OF_PARTICLE_IN_BYTES
        }
        read_index += SIZE_OF_PARTICLE_IN_BYTES   // always advance, so dead particles get skipped over
    } while (--numParticles > 0)
    liveParticles = write_index / SIZE_OF_PARTICLE_IN_BYTES   // new live count after compaction
Simplified for illustrative purposes. In real life, I used a wrapper that managed offsets automatically. But the point is that it's a flat buffer; we don't try to serialise objects other than to read the properties out and write them back.
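
A guess at what such an offset-managing wrapper might look like (illustrative, not the actual code):

    class ParticleBuffer {
        constructor(sharedBuffer, stride) {
            this.view = new DataView(sharedBuffer)
            this.stride = stride            // bytes per particle
            this.readOffset = 0
            this.writeOffset = 0
        }
        readXY() {
            const x = this.view.getFloat32(this.readOffset + 0, true)
            const y = this.view.getFloat32(this.readOffset + 4, true)
            this.readOffset += this.stride
            return { x, y }
        }
        writeXY(x, y) {
            this.view.setFloat32(this.writeOffset + 0, x, true)
            this.view.setFloat32(this.writeOffset + 4, y, true)
            this.writeOffset += this.stride
        }
    }
The loop body then only deals with values; skipping writeXY for a dead particle is what compacts the buffer.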



