
Been using CF Workers with JavaScript and I absolutely love it.

What is the performance overhead when comparing native Rust against Rust compiled to WASM?

I also think the time for a FOSS alternative is coming. Serverless with virtually no cold starts is here to stay, but being tied to only 1 vendor is problematic.



> I also think the time for a FOSS alternative is coming. Serverless with virtually no cold starts is here to stay, but being tied to only 1 vendor is problematic.

Supabase Edge Functions runs on the same V8 isolate primitive as Cloudflare Workers and is fully open-source (https://github.com/supabase/edge-runtime). We use the Deno runtime, which supports Node built-in APIs, npm packages, and WebAssembly (WASM) modules. (disclaimer: I'm the lead for Supabase Edge Functions)
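
To give a sense of the shape of a function, here's a minimal Deno-style handler (a rough sketch, not lifted from our docs; the request/response shape is arbitrary):

    // Minimal Deno-style edge function: respond to each incoming HTTP request.
    // Deno.serve is the standard Deno HTTP server API.
    Deno.serve(async (req: Request): Promise<Response> => {
      // Fall back to a default if the request has no JSON body.
      const { name } = await req.json().catch(() => ({ name: "world" }));
      return new Response(JSON.stringify({ message: `Hello ${name}!` }), {
        headers: { "Content-Type": "application/json" },
      });
    });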


It would be interesting if Supabase allowed me to use that runtime without forcing me to use Supabase, as a separate product on its own.

Several years ago I used MeteorJs; it uses Mongo and is somewhat comparable to Supabase. The main issue that burned me and several projects was that it was hard or even impossible to bring in different libraries. It was a full-stack solution that did not evolve well: great for prototyping until it became unsustainable, and even hard to onboard new devs because of its own take on “separation of concerns” and the big learning curve of one big framework.

Having learned from this, I only build apps where I can bring in whatever library I want. I need tools/libraries/frameworks to be as agnostic as possible.

The thing I love about CloudFlare workers is that you are not forced to use any other CF service. I have full control of the code, I combine it with HonoJs and I can deploy it as a server or serverless.

About the runtimes: having to choose between Node, Deno and Bun is something that I do not want to do. I’m sticking with Node and hopefully the runtimes will be compatible with standard JavaScript.


>It would be interesting if Supabase allowed me to use that runtime without forcing me to use Supabase, as a separate product on its own.

It's possible for you to self-host Edge Runtime on its own. Check the repo, it has Docker files and an example setup.

> I have full control of the code, I combine it with HonoJs and I can deploy it as a server or serverless.

Even with Supabase's hosted option, you can choose to run Edge Functions and opt out of the other services. You can run Hono in Edge Functions, meaning you can easily switch between CF Workers and Supabase Edge Functions (and vice versa): https://supabase.com/docs/guides/functions/routing?queryGrou...
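
Roughly what that portability looks like (a minimal, untested sketch; the hono import specifier depends on how you install it):

    import { Hono } from "npm:hono"; // on Workers you'd typically import from "hono"

    const app = new Hono();
    app.get("/hello", (c) => c.json({ message: "hello" }));

    // Cloudflare Workers: export the app; the runtime calls its fetch handler.
    export default app;

    // Supabase Edge Functions / Deno: pass the same fetch handler to Deno.serve:
    // Deno.serve(app.fetch);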

> Having to choose between Node, Deno and Bun is something that I do not want to do. I’m sticking with Node and hopefully the runtimes will be compatible with standard JavaScript.

Deno supports most of Node's built-in APIs and npm packages. If your app uses modern Node, it can be deployed on Edge Functions without having to worry about the runtime (having said that, I agree there are quirks, and we are working on native Node support as well).
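
For example (a rough sketch; the specific packages are arbitrary), Node built-ins and npm packages are imported via the node: and npm: specifiers:

    // Node built-in modules are available behind the node: specifier.
    import { createHash } from "node:crypto";
    // npm packages are available behind the npm: specifier (zod is just an example).
    import { z } from "npm:zod";

    const User = z.object({ name: z.string() });
    const digest = createHash("sha256")
      .update(JSON.stringify(User.parse({ name: "ada" })))
      .digest("hex");
    console.log(digest);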


Cool, I'll check it out.


It surely depends on your use case. Testing my Ricochet Robots solver (https://ricochetrobots.kevincox.ca/), which is pure computation with effectively no IO, the speed is basically indistinguishable. On some runs the WASM is faster, on others the native is faster. On average the native is definitely faster, but surprisingly it is within the noise.

Last time I compared (about 8 years ago) WASM was closer to double the runtime. So things have definitely improved. (I had to check a handful of times that I was compiling with the correct optimizations in both cases.)


The stats I've seen show a 10-20% loss in speed relative to natively-compiled code, which is effectively noise for all but the most critical paths.

It may get even closer with Wasm 3.0, released 2 months ago, since it has things like 64-bit address support, more flexible vector instructions, typed references (which remove runtime safety checks), basic GC, etc. https://webassembly.org/news/2025-09-17-wasm-3.0/


Unfortunately, 64-bit address support does the opposite: it comes with a non-trivial performance penalty, because it breaks the tricks that were used to minimize sandboxing overhead in 32-bit mode (with 32-bit addresses the engine can reserve a large guard region so most bounds checks can be elided; with 64-bit addresses it needs explicit bounds checks).

https://spidermonkey.dev/blog/2025/01/15/is-memory64-actuall...


1) This may be temporary.

2) The bounds checking argument is a problem, I guess?

3) This article makes no mention of type checking, which is also a new feature; it moves some checks that would normally run at runtime to a single check at compile time, and this may include bounds-style checks.


The Cloudflare Workers runtime is open source: https://github.com/cloudflare/workerd

People can and do use this to run Workers on hosting providers other than Cloudflare.
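
A worker is just an ES module with a fetch handler, so the same code runs under workerd as on Cloudflare. A minimal sketch:

    // A minimal Workers-style module: both Cloudflare and workerd invoke the
    // default export's fetch handler for each incoming request.
    export default {
      async fetch(request: Request): Promise<Response> {
        const url = new URL(request.url);
        return new Response(`Hello from ${url.pathname}`);
      },
    };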


It's also worth noting that workerd is only a part of the Cloudflare Workers stack. It doesn't have the same security properties.

https://github.com/cloudflare/workerd#warning-workerd-is-not...

(I know you know this, but frankly you should add a disclaimer when you comment about CF or Capnp. It's too convenient for you to leave out the cons.)


Job scheduling and tenant sandboxing are generally the responsibility of the hosting provider, not the JS runtime. If you are going to run workerd on, say, Lambda, then you rely on Lambda for these things, not workerd. No other server JS runtime offers hardened sandboxing either -- they all defer to the hosting provider.

(Though if we assume no zero-days in V8, then workerd as-is actually does provide strong sandboxing, at least as strong as (arguably stronger than) any other JS runtime. Unfortunately, V8 does in fact have zero-days, quite often.)

What mariopt said above was: "being tied to only 1 vendor is problematic." My point here is that when you build on Workers, you are not tied to one provider, because you can run workerd anywhere. And we do, in fact, have former customers who have migrated off Cloudflare by running workerd on other providers.

> frankly you should add a disclaimer when you comment about CF or Capnp

I usually do. Sometimes I forget. But my name and affiliation are easily discovered by clicking my profile. I note that yours are not.


I think it's pretty well understood that Cloudflare does not actually deploy a VM/container/etc per tenant, but you guys are relying on something like detecting bad behavior and isolating or punishing tenants that try to use attacks in the style of rowhammer: https://developers.cloudflare.com/workers/reference/security... -- so there is secret sauce that is not part of workerd, and one cannot get the same platform as open source.

Meanwhile, somebody like Supabase is making the claim that what you see as open source is what they run, and Deno says their proprietary stuff is KV store and such, not the core offering.

Now, do these vendors have worse security, by trusting the V8 isolates more? Probably. But clearly Cloudflare Workers are a lot more integrated than just "run workerd and that's it" -- which is the base Supabase sales pitch, with Postgrest, their "Realtime" WAL follower, etc.

(I am not affiliated with any of the players in this space; I have burned a few fingers trying to use Cloudflare Workers, especially in any advanced setup or with Rust. You have open, valid, detailed, reproducible, bug reports from me.)


I am not very familiar with Supabase edge functions, but it appears to be based on Deno. According to Deno's documentation, it does not implement hardening against runtime exploits, instead recommending that you set that up separately:

https://docs.deno.com/runtime/fundamentals/security/#executi...

The intro blog post for Supabase edge functions appears to hint that, in production, they use Deno Deploy subhosting: https://supabase.com/blog/edge-runtime-self-hosted-deno-func...

Note that Deno Deploy is a hosting service run by Deno-the-company. My understanding is that they have proprietary components of their hosting infrastructure just like we do. But disclaimer: I haven't looked super-closely, maybe I'm wrong.

But yes, it's true that we don't use containers; instead, we've optimized our hosting specifically for isolates as used in workerd, which allows us to run more efficiently and thus deploy every app globally with better pricing than competitors who only deploy to one region. Yes, how we do that is proprietary, just like the scheduling systems of most/all other cloud providers are also proprietary.

But how does that make anyone "tied to one vendor"?


> But how does that make anyone "tied to one vendor"?

Because you can't, in the general case, recreate the setup on a different platform? That's like the definition of that expression.

BTW here's Deno saying Deno Deploy is process-per-deployment with seccomp. No idea if that's always true, but I'd expect them to boast about it if they were doing something different. https://deno.com/blog/anatomy-isolate-cloud

Process-per-deployment is something you can reasonably recreate on top of K8s or whatever for self-hosting. And there's always Knative. Note that in that setting, scheduling and tenant sandboxing are not the responsibility of the hosting provider.

Personally, I haven't really felt that cold starts are a major problem when I control my stack, don't compile JavaScript at startup, can leave one instance idling, and so on. Which is why I'm pretty much OK with the "containers serving HTTP" stereotype for many things, when that lets me move them between providers with minimal trouble -- especially considering the pain I've felt with pretty much every "one vendor" stack, hitting every edge-case branch on my way falling down the stack of abstractions. I've tried very hard to use Durable Objects over and over, and I keep coming back to serving HTTP with Rust or TypeScript, using Postgres or SQLite.
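
To be concrete, by "containers serving HTTP" I mean nothing fancier than something like this (a minimal sketch):

    // A plain Node HTTP server: trivially portable between container hosts.
    import { createServer } from "node:http";

    const port = Number(process.env.PORT) || 8080;

    createServer((req, res) => {
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ path: req.url }));
    }).listen(port);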

Pretending you don't see the whole argument for why people want the option of self-hosting the whole real thing really comes across as the cliched "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"


> BTW here's Deno saying Deno Deploy is process-per-deployment with seccomp.

And that part isn't open source, AFAICT.

> Because you can't, in the general case, recreate the setup on a different platform?

You also can't recreate Lambda on Google Cloud since Lambda's scheduler is not open source.

But you can use Google Cloud Functions instead.

None of these schedulers are open source. Not Deno Deploy, not Supabase, and yeah, not ours either. Standard practice here is to offer an open source local runtime that can be used with other schedulers, but not to open source the cloud scheduler itself.

> Pretending you don't see the whole argument for why people want the option of self-hosting the whole real thing

Yes I get that everyone would like to have full control over their whole stack and would like to have it for free, because of course, why wouldn't you? I like those things too!

But we're a business, we gotta use our competitive advantage to make money.

The argument that I felt mariopt was making, when they said "being tied to only 1 vendor is problematic", is that some proprietary technology locks you in when you use it. Like if you build a large application in a proprietary programming language, then the vendor jacks up the prices, you are stuck. All I'm saying is that's not the case here: we've open sourced the parts needed so that you can switch vendors. The other vendor might not be as fast and cheap as us, but they will be just as fast and cheap as they'd have been if you had never used us in the first place.

I will also note, if we actually open sourced the tech, I think you'd find it not as useful as you imagine. It's really designed for running a whole multi-tenant hosting service (across a globally distributed network) and would be massive overkill for just hosting your own code. workerd is actually better for that.

> Durable Objects

I want to be forthright and admit my argument doesn't currently hold up here. workerd's implementation of Durable Objects doesn't scale at all, so can't plausibly be used in production. We actually have some plans to fix this.


Workers is a V8 isolates runtime like Deno. V8 and Deno are both open source, and Deno is used in a variety of platforms, including Supabase and ValTown.

It is terrific technology, and it is reasonably portable, but I think you would be better off using it in something like Supabase, where the whole platform is open source and portable, if those are your goals.


In code I’ve worked on, cold starts on AWS Lambda for a Rust binary that handled nontrivial requests were around 30ms.

At that point it doesn’t really matter if it’s cold start or not.


Workerd is already open source so that's a good start.



