Flawless – Durable execution engine for Rust (flawless.dev)
304 points by bkolobara on Oct 25, 2023 | hide | past | favorite | 117 comments


How will this deal with workflow versioning? IMO the hardest problem in e.g. Temporal/Cadence


Hi! Author here.

I have spent a lot of time thinking about this and believe that the most straightforward solution for long-running (or even forever-running) workflows is to allow hot upgrades.

A hot upgrade would only succeed if you can exactly replay the existing side effect log history with the new code. Basically you do a catchup with the new code and just keep running once you catch up. If the new code diverges, the hot upgrade would fail and revert to the old one. In this case, a human would need to intervene and check what went wrong.

There are other approaches, but I feel like this is the simplest one to understand and use in practice. During development you can already test whether your code diverges, using existing logs.
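A stdlib-only sketch of that catch-up check (the log shape and all names here are my invention, not flawless's actual format): replay the recorded side-effect results against the new code and abort the upgrade on the first divergence.

```rust
use std::collections::VecDeque;

/// One recorded side effect: the operation requested and the result it produced.
/// (Hypothetical shape; flawless's real log format is not public.)
#[derive(Clone, Debug, PartialEq)]
struct LogEntry {
    op: String,
    result: u64,
}

/// Attempt a hot upgrade: replay the existing side-effect log with the new
/// code. The upgrade succeeds only if the new code requests exactly the same
/// operations in the same order; any divergence aborts it.
fn try_hot_upgrade(
    log: &[LogEntry],
    new_code: impl Fn(&mut dyn FnMut(&str) -> Result<u64, String>) -> Result<(), String>,
) -> Result<(), String> {
    let mut remaining: VecDeque<LogEntry> = log.iter().cloned().collect();
    let mut effect = move |op: &str| -> Result<u64, String> {
        match remaining.pop_front() {
            // Replay: return the recorded result instead of re-running the effect.
            Some(entry) if entry.op == op => Ok(entry.result),
            Some(entry) => Err(format!(
                "diverged: log recorded `{}`, new code requested `{}`",
                entry.op, op
            )),
            // Log exhausted: we caught up. A real engine would now execute for real.
            None => Ok(0),
        }
    };
    new_code(&mut effect)
}

fn main() {
    let log = vec![
        LogEntry { op: "http_get".into(), result: 200 },
        LogEntry { op: "random".into(), result: 42 },
    ];

    // Compatible new version: same effects, same order.
    let compatible = try_hot_upgrade(&log, |effect| {
        let _status = effect("http_get")?;
        let _n = effect("random")?;
        Ok(())
    });
    assert!(compatible.is_ok());

    // Incompatible new version: effects reordered, so replay diverges.
    let incompatible = try_hot_upgrade(&log, |effect| {
        let _n = effect("random")?;
        let _status = effect("http_get")?;
        Ok(())
    });
    assert!(incompatible.is_err());
    println!("hot-upgrade check: ok");
}
```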


Is this the problem where you have workflows live and in progress, yet you want to update the workflow and have things not break? Would you want to have multiple versions at once, or rather some way to migrate the in progress ones to the latest workflow definition?


The latter.

For short lived workflows, you may not care about updating; just let it finish.

For longer jobs, you want some way to replace the current logic and either resume from where the job left off, or restart it idempotently. Especially if your workflow spans months or years (which at least some of these systems are designed for).

The challenge is that these systems shine when you manage the job state in-memory, but they don't "store" the data in a traditional sense. They just replay your logic and replay the original I/O results. So if your logic changes, the replay breaks and your state goes bye-bye.

(I think of it similarly to React's "rules of hooks": you can't do anything that makes the function call key APIs in a different order than previous executions)

So you either accept that you can never update an in-flight job (in a meaningful way, at least), or you track job state in some other system and throw away the distinguishing feature of these systems.
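A toy illustration of that ordering constraint, with the "engine" reduced to an iterator over logged I/O results (no real engine or API involved): a new code version that consumes the log in a different order still replays "successfully" but silently reconstructs different state.

```rust
// `run_workflow` reconstructs its state purely from the ordered log of
// recorded I/O results (here just a vec of numbers). The `swapped` flag
// simulates a new code version that performs the same two effects in the
// opposite order: replay still completes, but rebuilds different state.
fn run_workflow(io_log: &mut std::vec::IntoIter<u64>, swapped: bool) -> Option<u64> {
    let mut next_io = || io_log.next(); // each effect pops the next logged result
    let (a, b) = if swapped {
        let b = next_io()?;
        let a = next_io()?;
        (a, b)
    } else {
        let a = next_io()?;
        let b = next_io()?;
        (a, b)
    };
    // In-memory state is derived entirely from the replayed results.
    Some(a * 10 + b)
}

fn main() {
    let log = vec![3_u64, 7];
    let original = run_workflow(&mut log.clone().into_iter(), false);
    let upgraded = run_workflow(&mut log.clone().into_iter(), true);
    assert_eq!(original, Some(37));
    assert_eq!(upgraded, Some(73)); // same log, different order: state silently differs
    println!("original = {:?}, upgraded = {:?}", original, upgraded);
}
```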

I'm curious how people normally handle this. When I worked with Azure Durable Functions I couldn't find a way around this.


I think the problem works both ways too, because for an in progress (but long-running) workflow, you may not want to retroactively apply your new business logic for the part that's already run because that would be unexpected. But for the logic that hasn't run yet, you would certainly want the latest and greatest.

I wonder if there could be an approach where you have both versions live simultaneously, and introduce some sort of "checkpoint" into the old version that would act similar to a DB migration. When re-computing a workflow you could then start from the latest checkpoint, but any workflows that were created with the old version that haven't reached a checkpoint would continue to run the old code until it does.


You’d likely want either option depending on the lifetime.


True. That’s one thing well handled by Conductor instead.


Our field is getting closer and closer to architecture and medicine now. With tech like this we can come out of the fiddling age and dive deep into a serious engineering culture.

Quick question: how do you prevent persisting the effects of a DoS attack on these systems?


> Our field is getting closer and closer to architecture and medicine now. With tech like this we can come out of the fiddling age and dive deep into a serious engineering culture.

This comment reads like ml-generated nonsense. Architecture is summed up as subjective opinions bounded by regulatory constraints, and medicine is still empirical knowledge validated through experiments. None of these fields has even a passing resemblance to engineering.


That's why I said architecture and medicine and not rocket science.

We have a long way to go.

However, I think you are very reductive in your descriptions of those fields.

Sorry I meant, as a large language model...


> That's why I said architecture and medicine and not rocket science.

You conflated architecture and medicine into a buildup to "serious engineering culture". There is no way around it: it's nonsense. Even though they are highly specialized fields, neither architecture nor medicine has anything to do with the basic principles of engineering. It's just nonsensical word soup.


> This comment reads like ml-generated nonsense

Not a fan of the new meta


Medicine has incredibly rigorous engineering processes; if anything, I'd say the software world at large has a thing or two to learn from medicine (particularly class II+ devices) about building robust, long-lasting software that you know works the way you designed it to.

See: ISO13485, IEC60601, QMS, etc


> Medicine has incredibly rigorous engineering processes (...)

You're confusing mechanical engineering applied to medical devices with medicine.

It's like claiming bakeries have incredibly rigorous engineering processes just because mechanical and industrial engineers design devices and automate processes.

Not the same thing, is it?

> See: ISO13485, IEC60601, QMS, etc

You're just pointing to standards that engineers working on specialized devices need to comply with. One covers quality assurance and the other is focused on specialized electrical equipment.


That's true, those are merely popular examples of how engineering is done in med device, and not specific to the software side, though there are standards and guidance documents for specific fields. I didn't want to get too off in the weeds. But to be a 13485-compliant device developer, you need to have a competent requirements-driven, verified and validated development cycle, release process, and a complete design history with traceability all throughout, so you can immediately link problems to the actual components that caused them.

As an analogy, it takes the bill-of-materials approach done by popular package managers and makes it a whole lot more rigorous.


I do this, and it's a little onerous, but it's mostly just half-decent engineering. E.g. every commit has a ticket; every ticket has clear test criteria and a requirement to link back to (also a ticket, sometimes); every PR has a reviewer; every PR is tested by someone; every release knows what commits it has in it. Most of it can be done by Jira (or similar) and Git, and a fairly normal process. Not an Amazon-style 1000-changes-a-day sort of process, but still.


> But to be a 13485-compliant device developer, (...)

Those would be engineers, mainly mechanical engineers or electrical engineers.


I find that it’s more popular to pontificate about what it means for software to be engineered to a high standard. That sorta rests on ignoring the fact that there are localized fields where we know how to do it. People want to talk hypotheticals rather than go see how we do it.

We know how; it’s just slow and expensive, and the tooling isn’t some fantasy perfect environment where safety and effectiveness are already built in and developers just do the business logic.


Do we know how? When I look at things like ISO 26262 I see an attempt to use poor tools to cobble together something barely adequate with the expenditure of an enormous amount of human labour. The results are nothing to write home about.


Reductive, rude and not productive.


Neither architecture[1] nor medicine[2] is an engineering discipline, so I don't really see how getting close to them gets us to an engineering culture.

[1] Which is a form of art. Go look at a school of architecture at a university; it will be in the Art faculty.

[2] Which is an applied science, like engineering, but not an engineering discipline.


Does this determinism extend to floating point computations? This has historically been a pain point with multiplayer games where the client state has to be periodically re-synced with the server state due to slowly accumulating drift in floating point calculations.


When I implemented a WASM compiler, the only source of float-based non-determinism I found was in the exact byte representation of NaN. Floating point math is deterministic. See https://webassembly.org/docs/faq/#why-is-there-no-fast-math-... and https://github.com/WebAssembly/design/blob/main/Nondetermini....
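As a stdlib-only illustration of that one remaining hole (the helper name is mine, not from any wasm runtime): the payload bits of a NaN can differ across hardware, so a deterministic runtime can collapse every NaN into a single canonical bit pattern.

```rust
// Rewrite any NaN to the canonical quiet NaN: sign bit 0, all exponent bits
// set, and only the most significant mantissa bit set. Non-NaN values pass
// through unchanged, so ordinary float math is unaffected.
fn canonicalize(x: f64) -> f64 {
    if x.is_nan() {
        f64::from_bits(0x7ff8_0000_0000_0000)
    } else {
        x
    }
}

fn main() {
    // Two NaNs with different payload bits...
    let a = f64::from_bits(0x7ff8_0000_0000_0001);
    let b = f64::from_bits(0x7ff8_0000_0dea_dbee);
    assert!(a.is_nan() && b.is_nan());
    assert_ne!(a.to_bits(), b.to_bits());
    // ...collapse to the same bit pattern after canonicalization.
    assert_eq!(canonicalize(a).to_bits(), canonicalize(b).to_bits());
    // Ordinary values are untouched.
    assert_eq!(canonicalize(1.5), 1.5);
    println!("canonical NaN bits: {:#018x}", canonicalize(a).to_bits());
}
```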


> Floating point math is deterministic.

It is deterministic on a single machine, but unfortunately it is not deterministic across machines (and especially not across architectures).

... however, if you restrict yourself to a subset of floating point, it's straightforward to provide cross-platform determinism. That's what the Rapier physics engine does:

https://rapier.rs/docs/user_guides/rust/determinism/


On that note, Wasmtime has an option to enable NaN canonicalization, which should patch that hole, too: https://docs.rs/wasmtime/latest/wasmtime/struct.Config.html#...


It may also be thread nondeterminism, e.g. relying on the commutativity of addition, etc.


The other common one is elementary functions from libm (i.e. exp/log etc)


Do note that sqrt should return the same results on all libm implementations, as well as functions like nextafter or scalbn.


That's probably more about GPUs cheating to be faster[1]. https://asawicki.info/news_1741_myths_about_floating-point_n...

[1]: Back in the day, CPU results could also vary based on path of execution; see tweet in article.


Is one option to sync “bignum” structs instead of f64, or is that not realistic in terms of complexity/size?


Where is the state for the side effects stored? Say I have an AWS Lambda that I want to make idempotent. Lambdas don’t have local storage that persists across runs (unless you mount EBS volumes or something) so I presume state can be stored in a DB?


> Where is the state for the side effects stored?

That's what you'll be paying for when they release the product.


I love the animation showing the core principle and how it works, really well done.


Thank you! I hand-coded it with just HTML, CSS and JavaScript and put a lot of effort and love into it. The code is not the prettiest, but the implementation is straightforward if anyone wants to check it out: https://flawless.dev/js/how-does-it-work-animation.js


    // Update the `keyFrame`.
    keyFrame += 0.5;

    // This is just a huge state machine progression.
    switch (keyFrame) {
        // In the first 4 seconds we just run through the existing log messages.
        case 0.5:
        case 1:
        case 1.5:

    ...

Wow! I love the design. So simple.


I kinda wanted to see it fail at the HTTP execution phase, because that’s much more common due to timeouts when calling external endpoints.


Looks interesting. I wonder if the method of marking functions as 'having side effects' is going to be easy to make foolproof. For instance, I assume that in the example the random number generation is a side effect because it comes from an RNG provided by flawless itself. Would this have worked with a regular Rust function as well?

I assume there is going to be some kind of test harness that allows developers to check their workflows.


Hi! I'm the author of flawless.

By using WebAssembly, it's kinda foolproof by default. WebAssembly explicitly requires you to declare host calls inside your modules. If you try to use a host call that is not provided by flawless, your module can't be instantiated.

It's also important to note that there are multiple standardisation efforts going on in the WebAssembly space. For example, if you are using the Rust `rand` crate and compiling to WebAssembly, it uses the WASI host functions for generating random numbers. While I'm waiting for wasi, wasi-http and others to be standardised, I expose my own interface for now.

Obviously, this also has a big downside: you can't compile all Rust code to WebAssembly. However, I prefer this reject-by-default approach, so that you are guaranteed never to have unintended side effects.


Could you provide a way to mark function calls as side effects similar to Temporal?

I'm a bit worried about the limitation of having to use the flawless namespace functions for everything.


Flawless integrates with your existing system. You probably don't need the guarantees that flawless provides for all code, but just specific functions/workflows.


Seeing how both the RNG and the HTTP request are under the flawless namespace, this is going to require you to write 'std::flawless' instead of giving you access to the entire Rust ecosystem. A harness would solve most issues, but I wonder how much of the functionality you can map.

So far, this looks more like a scripting runtime which uses Rust.


Looks like a Rust alternative to Temporal using wasm as a runtime. Love it!

Founder of windmill.dev here, which is another durability engine in Rust, except it's a lot less elegant: we split our workflows into well-defined steps in Python/TypeScript/Go/Bash and can resume from incomplete steps only by restarting from the last step, storing the result of each step forever in our Postgres DB (using jsonb). The use-cases are clearly different, and I can see flawless being so lightweight that you could use it to model UI flow state and scale it to millions on a small server, as pointed out by the site.

This is fantastic, hopefully one day rust will power all distributed systems.


“Any sufficiently complicated concurrent program in another language contains an ad hoc informally-specified bug-ridden slow implementation of half of Erlang.” – Virding’s first rule of programming


Looks like Flawless's goal is to persist the intermediate state of some workflow and restart it from a midpoint in case of a failure. That's quite different from Erlang, which was mostly aimed at developing software while having access only to prototype versions of hardware with multiple hardware issues. To me these approaches are opposites: Flawless will be stuck in a loop trying to finish a workflow that crashes in the middle, while Erlang will happily discard 50% of the traffic that somehow hits the hardware bug.


I like to see this on a spectrum of 'reliable computation' and 'high availability' (and interruptible computing), with continuations/CallCC, checkpoint/restore (CRIU, DMTCP), VM snapshots or (live) migration, up to double/triple hardware and even dual implementation, but also Kafka.

I find it a bit disheartening that these features are not more pervasive on language, runtimes, libraries and OSes, so very often the actual solutions are either very kludgy or very high maintenance, whereas the primitives are really interesting for many other endeavours.


Functional programmers think this way though. Everything is data, or a manipulation/transform of data, and we can wrap functions with higher level things to track inputs and outputs and persist that.

Erlang won’t do this out of the box, but it’s a trivial addition to your system.

I guess I just don’t understand how this is such a novel idea.


It’s not a novel idea. At minimum, prior art is:

- Temporal / Cadence

- Amazon SWF

- Azure Durable Functions

Not even the WebAssembly part is novel; Temporal does that for several languages.

This seems to be a new implementation of an existing idea, and may end up being cleaner - that part is to be seen since there’s no actual product or code visible yet.


Another relevant language might be Vale (https://vale.dev), which is aiming for "perfect replayability": https://verdagon.dev/blog/perfect-replayability-prototyped



Even Erlang's claims around resilience and fault tolerance are sometimes a bit overblown,

yeah they should have been more humble about their system and called it Flawless :)


Yeah. Actors and the BEAM are just the runtime. On top of that you build your system/domain logic and part of that is persisting state at regular intervals and being able to restore that state from any kind of failure. You build a durable app, or you don’t.

“It’s commonly known under the name durable execution, and is so new that most developers have never heard of it”

Seriously? I feel like this post is very naive and perhaps disingenuous in thinking that Erlang/Elixir developers are not accounting for this. I’ve been building apps this way for a long time, regardless of language.

Rewriting everything in rust doesn’t solve any of these issues. State is state.

This tool is literally Dagster or Prefect, but it’s claiming to be a revolutionary new idea.


This appears to be a novel way to represent how the application interacts with state though, an API plus runtime.

I've used workflow engines before, which provide similar capabilities: execute a distributed process to completion in the presence of failures. However, you have to be really careful when building workflow applications not to introduce accidental side-effects (that are not modeled in the workflow directly), or else the result will be nondeterministic (unspecified at best).

I haven't used Dagster or Prefect but it looks like this tool uses a different approach. An application developer doesn't need to "model" their workflow using this tool - just implement it on top of the API that Flawless provides. It reminds me a bit of the AWS Flow framework [1] for AWS SimpleWorkflow, but when developing Flow applications (which are Java apps) you still have to be extremely careful not to accidentally introduce local side effects or nondeterminism.

Because this approach provides a deterministic runtime, I see it being plausible that developers could be significantly more confident that their code is, in fact, deterministic.

[1] https://aws.amazon.com/swf/details/flow/


> but it’s claiming to be a revolutionary new idea.

I haven't touched erlang in ages, but Joe Armstrong popped up in my youtube recommendations recently. After watching the video, I just thought about microservices and IaC for a while and thought "Ha, we're really fucking this up, aren't we?"


Which video?


It wasn't a video pitching the language and certainly not drawing any explicit parallels to modern practices. Just something that got me thinking about the distributed concurrency system of erlang for the first time in a decade.

Here is a conference talk that is a pitch, if that's what you're looking for: https://www.youtube.com/watch?v=cNICGEwmXLU


Thanks!


This seems... not the same as Erlang at all? Erlang solves the problem of persistent state by essentially eliminating it: more or less all state exists in message queues, or in an external database or something. Flawless seems to solve that problem with a technique similar to, but not quite the same as, filesystem journaling: take note of your side effects when performing them. In the case of filesystem journaling it's so that you can redo them if you crash, but in this case it's so that you don't need to.

It's not entirely clear to me how well this would work in which domains, but the overlap with domains where Erlang works well seems less than total.



In that case, all the informally-specified bug-ridden program needs to be is fast to beat expectations.


While it might have tons of bugs, this one seems better: programming stuff as you normally would and having it distributed and durable automagically. At least that's what I got from it. And that's nice.


This is how people often describe Erlang and Elixir. But I guess it's fair to say that functional programming is not "programming as you usually would" for most!


"This is how people often describe Erlang and Elixir."

It is. I recall it fooling me when I first got into Erlang. But it's wrong. Erlang has some tools that help lead you in a more robust direction, but you still have to work to use them. They are not automatic and it's trivial to write an Erlang service that has a single point of failure, even accidentally.

Now, I don't want to criticize the tools it has and the fact that it does rather strongly lead you in a robust direction more than many systems. This is somewhere between "an easy mistake to make for a newbie" and "some sloppy advocacy sometimes by some people", 100% a people thing, not a criticism of the code base or technicals at all. But it is important for people considering Erlang/Elixir or early in the process to understand that it does not simply automatically make all your code run robustly in a cluster. It is only a strong push in the right direction and you still must understand it enough to make sure you don't break it.


Well, with Erlang you really have to plan these things and implement them. Functional style (pure/immutable even more so) helps, but it doesn't pick up everything; it's not drop-in. It's very possible, but you have to change habits, even if you were a functional programmer before.


Rust isn't exactly the normal style either unless you were used to some subset of modern c++ that probably only gets used in gaming or embedded.

While I wouldn't normally argue for rigor or anything tedious, creating computationally expensive abstraction layers to let you develop capabilities faster without having to worry about defects or edge cases seems the opposite of what rust is designed for. I think the language will fight you the whole way. The only aid I can think of is that you can wildly unwrap stuff everywhere not worrying about panics. Which does save some time.


> Rust isn't exactly the normal style either unless you were used to some subset of modern c++ that probably only gets used in gaming or embedded.

But, as I understand it, it works with 'normal Rust', which, if you enjoy Rust, would be normal, right? It seems to compile the Rust to wasm, run it in wasm, log the results while keeping track of how far it got, and then rerun it with the previous results. That's quite nice to have, but, like you say I think, it has foot guns. One of the worst is getting too comfortable thinking it has your back when this is not enough: side effects behind immutable variables are still there, and while this helps you a bit, it doesn't fix any of that and maybe makes it worse.

I am just a little annoyed that this is not the norm but rather something special (and closed source?). Why is it not everywhere? Probably because of high amounts of magic?


Very cool, and the approach demonstrated might be of interest to a similar problem we have in Ambient (our WASM game runtime that has competing processes that may need to retry interactions.)

That being said - what’s the relation to Lunatic [0]? Are you still working on Lunatic? Is this a side project? Or is it something completely separate?

[0]: https://lunatic.solutions/


Ah, good observation. This is by Bernard Kolobara the CEO and Co-founder of Lunatic.

https://kolobara.com


He (Bernard) hasn’t made any commits to lunatic in half a year, whereas before that he was regularly contributing, so one can only presume he took what he learned working on lunatic and started a project behind closed doors.


If anything, he made strides in the naming department :-)


> Imagine if you could just start an arbitrary computation and the system guarantees that it will run until completion and all the operations will be performed exactly once.

How is this guaranteed? Isn't exactly once delivery in a distributed system impossible?


Exactly-once processing is possible with idempotence keys over an at-least-once delivery system.
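A minimal in-memory sketch of that pattern (hypothetical names; a production system would commit the seen-key set atomically with the effect, e.g. in one database transaction):

```rust
use std::collections::HashSet;

/// Exactly-once *processing* on top of at-least-once *delivery*: every
/// message carries an idempotence key, and the consumer records keys it has
/// already handled so redeliveries become no-ops.
struct Consumer {
    seen: HashSet<u64>,
    total: u64,
}

impl Consumer {
    fn handle(&mut self, key: u64, amount: u64) {
        // `insert` returns false if the key was already present.
        if !self.seen.insert(key) {
            return; // duplicate delivery: already processed
        }
        self.total += amount; // the side effect, applied once per key
    }
}

fn main() {
    let mut c = Consumer { seen: HashSet::new(), total: 0 };
    // At-least-once delivery: the message with key 1 arrives twice.
    c.handle(1, 100);
    c.handle(1, 100);
    c.handle(2, 50);
    assert_eq!(c.total, 150);
    println!("total = {}", c.total);
}
```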


You've just pushed the problem down to the system that checks idempotence keys.


If you're referring to the CAP theorem, it needs some clarification: https://www.infoq.com/articles/cap-twelve-years-later-how-th...


It's not impossible. You just pick CP from the CAP theorem (and give up availability).

i.e., the message will be delivered exactly once if the system makes progress.

If you want an existence proof, NFSv3 had this working back in the 1980's. I doubt it was the first.


It does say, _Imagine_. Not sure what you're quoting, I don't see this text in TFA and your comment is not a child of another.


Recently, another alternative was teased out: https://www.golem.cloud/

From the folks at Ziverge, who've worked on ZIO in Scala.

They use a similar approach I believe. It's discussed in this podcast: https://podcasters.spotify.com/pod/show/happypathprogramming...


Was thinking exactly the same!


Doesn't look to be open source, only source available. I wonder what the plans for Flawless are in this regard.


If I understand correctly, Flawless provides the WASM runtime. If that’s the case why can’t it be entirely hidden to the user? The system provides things like entropy and networking.


The problem is the system on the other end usually has timing/causality expectations (e.g., in the case of reentrant calls), and that system is not usually part of your system (e.g., they wouldn't include whatever monotonically increasing counter your system would require to ensure causal consistency).


Probably some subtle technical details.

Like, "just" WASM as a target doesn't have many of the APIs built in without non-trivial "magic" tricks.

And for WASI it might be tricky to get the exact form of determinism they chose right.

They might need to provide a custom standard library, which is viable but not very convenient to use and even less convenient to maintain (if it's just about custom build flags for the "upstream" standard library it's on the other hand quite easy, but likely not good enough).

Though that's purely speculative.

But they might be lucky: there is quite a bit of benefit for Rust in having a deterministic wasm build target (for wasm-based derives, which, combined with cryptographic hashing of inputs, form a potential building block for a shared (potentially public) derive binary artifact cache, speeding up CI systems and the like, and allowing more complicated/advanced derive usage without having to worry about derive compilation time adding to the initial build time).


How is something like this implemented? Does it hook into the Rust compiler? Or does the Rust/wasm compiler architecture provide an intermediate step that allows these kinds of systems to be built on top?


Wasm itself provides some determinism constraints[1] as an explicit design goal, which is what enables things like this.

I made a very simple snapshottable compute executer for a talk last year and it took surprisingly little code if you want a look at how it works[2], but the complexity of something like Flawless is that doing anything useful involves communicating with the outside world, where non-determinism can easily sneak in. I've been following Bernard's work on Lunatic so I'm incredibly excited to see he's tackling these hard problems.

[1] https://github.com/WebAssembly/design/blob/main/Nondetermini...

[2] https://github.com/drifting-in-space/wasmbox/tree/main


This is very cool! But just to be clear: wasm allows someone to tap into variable declarations, etc.? Which is what flawless is doing. That's almost a parse tree, right?


I'm not 100% sure how Flawless works, but you don't need to tap into variable declarations if you can just snapshot the entire stack and heap, so I think that's what it does.


Reminds me of Dfinity without the crypto part. The killer app of Dfinity was never the crypto after all.


How different is this from Temporal?


Besides the obvious difference in execution engine, which is explained in the link, I noticed the same thing, and after doing some research I found out that Temporal cofounder Samar Abbas was the creator of the Amazon Flow Framework in 2009 and then the creator of the Azure Durable Task Framework in 2014. He then moved on to Uber and co-created Cadence with Maxim Fateev before they both eventually founded Temporal in 2019. https://www.temporal.io/about


Here's a little more context: https://temporal.io/blog/samars-journey


This looks very cool. I expect you’ll still want the services you talk to to be idempotent, but Flawless still takes a big chunk out of the work - and it seems very flexible too!


I have wanted something similar for Python: where the execution of a function is interrupted (i.e. via a dedicated exception) and then I can rerun it to the very same point where it previously halted, while none of the prior side effects occur again and the previous state within that function gets restored.

While I have an idea of how to implement it: now, after having read the article and the comments here, what is this concept called? Does an implementation for Python exist already?


Depending on what you're trying to do, you likely want either continuations (which aren't supported in Python directly, but which you can emulate by writing your function in continuation-passing style) or a debugger with rewinding (which is fairly cutting edge, but might exist for Python? PyPy blogged about an early version a few years back). The main use cases for a specific point interrupt are covered by these techniques.

That said, Temporal is very similar and supports multiple languages, including Python.


It's probably overkill depending on your use-case, but temporal has a python client: https://github.com/temporalio/sdk-python



Thanks, that would solve it! Their "workflows" provide a nice abstraction. Though I'd rather not include an external service.

Imagine a simpler implementation for me where any earlier-called subfunction would simply return the previous result, up to the point where the function was previously interrupted. Therefore these previous return values need to be stored (in what Temporal calls a workflow), i.e. in a database. Sounds simple enough, and decorators would probably help in designing a good abstraction and keeping everything self-documenting and simple enough to understand.
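A minimal sketch of that scheme (Rust here for consistency with the thread; all names are hypothetical, and a real version would persist results to a database rather than an in-memory map):

```rust
use std::collections::HashMap;

/// Each step's result is persisted under a step index; on a re-run,
/// already-completed steps return their stored result instead of executing
/// again, so the function "fast-forwards" to where it was interrupted.
struct StepLog {
    results: HashMap<u32, String>,
}

impl StepLog {
    fn step(
        &mut self,
        id: u32,
        run: impl FnOnce() -> Result<String, String>,
    ) -> Result<String, String> {
        if let Some(cached) = self.results.get(&id) {
            return Ok(cached.clone()); // replayed: side effect not re-executed
        }
        let out = run()?;
        self.results.insert(id, out.clone()); // persist before moving on
        Ok(out)
    }
}

fn workflow(log: &mut StepLog, fail_at_2: bool) -> Result<String, String> {
    let a = log.step(1, || Ok("charged-card".to_string()))?;
    let b = log.step(2, || {
        if fail_at_2 {
            Err("network timeout".to_string())
        } else {
            Ok("sent-email".to_string())
        }
    })?;
    Ok(format!("{a}+{b}"))
}

fn main() {
    let mut log = StepLog { results: HashMap::new() };
    // First run is interrupted at step 2; step 1's result is already recorded.
    assert!(workflow(&mut log, true).is_err());
    // Retry: step 1 is replayed from the log, step 2 now succeeds.
    assert_eq!(workflow(&mut log, false).unwrap(), "charged-card+sent-email");
    println!("resumed successfully");
}
```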


Depending on what you need, but thinking in a wider context, something like Prefect might be what you want.


The boring way to do guaranteed execution with retries is to store the work items in Postgres and use a pool of workers. The workers store intermediate results in a cache.

Such systems need tooling for diagnosing and fixing problems: metrics, logging, dead-letter queue, inspecting and evicting cached items, retrying dead jobs, adjusting worker settings. Flawless and similar systems will have the same problems and need the same tools.


Only Rust and WebAssembly seems very restrictive. What does it bring to the table vs. Temporal, for example, which works with any language and is rock solid?


If it used anything other than Rust, they couldn't call it "Flawless"...

(SCNR)


This approach should also work well for Haskell where "main" is already a pure computation producing a description of what IO actions to take.


How does this handle the machine running out of memory?


The “restart from where it failed”-aspect was a big reason for why I made Mats3. It is message-based, async, transactional, staged stateless services, or message-oriented asynchronous RPC. Due to the transactionality, and the “state lives on the wire”, if a flow fails, it can be restarted from where it left off.

https://mats3.io


This would be perfect for money transfers. A big part of money transfer reliability is properly resuming from errors.


I’ve seen a couple of these now. Looks like a lot of them take the “we will take your code and compile it”-approach.

That’s not a problem per se, but it does affect the ability to debug and see relevant stack traces. It’s like how sometimes you see transpiled JS when what you really want is the TypeScript, via source maps.


Very tenuously related, but the idea of logging all nondeterministic effects for idempotence can also be used to make locks lock-free!

https://arxiv.org/abs/2201.00813


This deterministic execution pattern reminds me a lot of the approach with Azure Durable Functions and I know I've seen it in other places as well. Curious if anyone knows where that pattern originated.


This is getting interesting! Samar from Temporal was the creator of the Azure Durable Task Framework at Microsoft.


Seeing mentions of Temporal and Erlang. Got some research to do.


Is this an explicit feature of the wasm runtime in use here, or might this break with future optimizations in the runtime and the introduction of threads and other similar features?


I was thinking of something like this for running queries over sharded data: the execution could be paused and moved to the server that is closer to the data source at runtime.

> Notice how flawless takes away the burden of persisting the state.

Reminds me of the Wisdom of the Well: "You'll never find a programming language that frees you from the burden of clarifying your ideas." [1]

Although solving durability of execution for arbitrary code sounds super cool, I suspect that writing the code in a way that could be checkpointed/resumed more naturally would probably be easier to debug and implement, in the end.

--

1: https://xkcd.com/568/


I’ve found it surprising that none of the wasm runtimes I’ve seen support dumping/restoring memory state. Would be very valuable.


> If you would like to be one of the first developers to get access to flawless as part of our private alpha. Join our waitlist.


smells like java and the jvm


check out Fermyon


linky: https://github.com/fermyon/spin#readme (Apache 2; and while I don't see any CLA, interestingly they do require GPG signed commits: https://developer.fermyon.com/spin/contributing-spin#committ... )


requiring signed commits is awesome


This site is great



