Head in the Zed Cloud (maxdeviant.com)
104 points by todsacerdoti 25 days ago | 47 comments


The attempts at collaborative tools in Zed were always far more interesting to me than the AI stuff. Don't get me wrong, their AI stuff is nice and works well for me, but it's hardly necessary in an editor given how good Claude Code and others are.

But the times I've used the collaboration tooling in Zed have been really excellent. It just sucks that it hasn't been getting much attention recently. In particular, I'd really like to see some movement on something that works across multiple different editors on this front.

I'm glad to hear they're still thinking about these kinds of features.


The thing that made me go "oh damn" was finding out the debugger is multiplayer.


What does it mean that the debugger is multiplayer?


Someone can debug your program over a remote session with no extra configuration.


Yeah, I am also glad that they are not exclusive about how you use AI, which is what makes it better. They need to stop marketing the AI stuff; it puts some people off. They should advertise how versatile they are instead.


The choice to go to WebAssembly is an interesting one.

Wasm 3.0, especially (released just 2 months ago), is really gunning for a more general-purpose "assembly for everywhere" status (not just "compile to web"), and it looks like it's accomplishing that.

I hope they add some POSIXy stuff to it so I can write cross-platform command-line TUIs that do useful things without needing to be recompiled for different OS/chip combos (at the cost of a 10-20% slowdown versus native compilation, not a critical loss for all but the most demanding use cases) and that are likely to simply keep working on all future OS/chip combos (assuming you can run the wasm, of course).


> I hope they add some POSIXy stuff to it

Are you aware of WASI? WASI preview 1 provides a portable POSIXy interface, while WASI preview 2 is a more complex platform-abstraction beast.

(Keeping the platform separate from the assembly is normal and good - but having a common denominator platform like POSIX is also useful).
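To give a flavour of how POSIXy preview 1 is, here's a minimal sketch, assuming the Rust wasm32-wasip1 target (named wasm32-wasi on older toolchains) and wasmtime as the runtime. The source is ordinary std code with nothing wasm-specific in it:

    // Build and run (assumed toolchain):
    //   rustup target add wasm32-wasip1
    //   cargo build --release --target wasm32-wasip1
    //   wasmtime run --dir . target/wasm32-wasip1/release/hello.wasm
    use std::fs;

    fn main() {
        // fs calls lower to WASI's POSIXy fd_* host functions; the runtime
        // controls which directories the module can actually see.
        let contents = fs::read_to_string("input.txt")
            .expect("needs a preopened dir, e.g. `wasmtime --dir .`");
        println!("read {} bytes", contents.len());
    }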


I'd go a bit further. If you want full POSIX support, perhaps WASIX is the best alternative. It's WASI preview 1 + many missing features, such as: threads, fork, exec, dlopen, dlsym, longjmp, setjmp, ...

https://wasix.org/


My understanding of the wasm execution model was that it was fundamentally single-threaded?


I don't think that's accurate, although it's true that it needs extra work to work properly in JS-based environments.

You can already create threads in Wasm environments (we even got fork working in WASIX!). However, there is an upcoming Wasm proposal that adds threads support natively to the spec: https://github.com/WebAssembly/shared-everything-threads


What are the options for working with WASIX (compiling to it, running it)?

Is this something that is expected to "one day" be part of WASM proper in some form?


Right now you should be good to go to start using WASIX.

If you want to compile threaded code, things should already work (without waiting for any proposal in the Wasm space). If you want to run it, there are a few options: use wasmer-js for the browser (Wasmer using the browser's Wasm engine + WASIX) or normal Wasmer to run it server-side.

No need to wait for the Wasm "proper" implementation. Things should already be runnable with no major issues.
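To make that concrete, here's a minimal sketch: plain std::thread, nothing WASIX-specific in the source. The `cargo wasix` build wrapper, target path, and `wasmer run` invocation below are assumptions about the current tooling and may shift between releases:

    //   cargo wasix build    # assumed build wrapper
    //   wasmer run target/wasm32-wasmer-wasi/debug/demo.wasm   # path/triple assumed
    use std::thread;

    fn main() {
        // Ordinary OS-style threads; under WASIX this compiles and runs today.
        let handles: Vec<_> = (0..4)
            .map(|i| thread::spawn(move || i * i))
            .collect();
        let sum: i32 = handles.into_iter().map(|h| h.join().unwrap()).sum();
        println!("sum of squares: {sum}"); // 0 + 1 + 4 + 9 = 14
    }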


How are Rust + WebAssembly + Cloudflare Workers in pricing and performance compared to, say, deploying Rust-based Docker images on Google Cloud Run or AWS Fargate?


Rust on CF Workers is horrible: a >10x performance hit (compared to JS) for a non-trivial web app. And it's not only a 10x performance hit but 10x the cost, since they charge for CPU time, and that's where the extra time is going.

Realistically for a low traffic app it's fine, but it really makes you question how badly you want to be writing Rust.

As far as I can tell, the problem stems from the fact that CF Workers is still V8 - it's just a web browser as a server. A Rust app in this environment has to compile the whole stdlib and include it in the payload, whereas a JS app is just the JS you wrote (plus the libs you pulled in). The JS also gets to use V8's data structures and JSON parsing, which are faster than the wasm-compiled Rust equivalents.

At least this is what I ran into when I tried a serious project on CF Workers with Rust. I tried going full Cloudflare but eventually migrated to AWS Lambda where the Rust code boots fast and runs natively.
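For reference, the Rust side looked roughly like this (the `worker` crate from workers-rs; the exact attribute and signature vary by crate version). All of it compiles to wasm, and every request and response crosses the JS<->wasm boundary:

    use worker::*;

    // Everything below runs inside V8's wasm engine; the Request/Response
    // types are wrappers around JS objects, so each access is a JS call.
    #[event(fetch)]
    async fn fetch(_req: Request, _env: Env, _ctx: Context) -> Result<Response> {
        Response::ok("hello from Rust on Workers")
    }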


I thought WASM was no_std since there's no built-in allocator?

Regardless, I'm not sure why a Rust engineer would choose this path. The whole point of writing a service in Rust is that you trade 10x the build complexity and developer overhead for a service that can run in a low-memory, low-CPU VM. Seems like the wrong tools for the job.


> Seems like the wrong tools for the job.

Thanks for the confirmation. I was confused as well. I always thought that the real use of WASM is to run exotic native binaries in a browser, for example, running Tesseract (for OCR) in the browser.


I think performance takes a hit due to WASM, and I imagine pricing is worse at big qps numbers (where you can saturate instances), but I've found that deploying on CF workers is great for little-to-no devops burden. Scales up/down arbitrarily, pretty reasonable set of managed services, no cold start times to deal with, etc.

The only issue is that some of the managed services are still pretty half-baked and introduce insane latency into things that should not be slow. KV checks/DB queries through their services can have double-to-triple-digit ms latencies depending on the config.
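To illustrate where that latency sits, here's a rough workers-rs sketch (the "CONFIG" binding name is made up, and the KV API details vary by crate version); each get() is a call out to the KV service, not a local read:

    use worker::*;

    #[event(fetch)]
    async fn fetch(_req: Request, env: Env, _ctx: Context) -> Result<Response> {
        let kv = env.kv("CONFIG")?; // hypothetical binding name
        // A network round trip to the KV service; depending on region and
        // config, this is where the tens-to-hundreds of ms show up.
        let value = kv.get("feature-flag").text().await?;
        Response::ok(value.unwrap_or_default())
    }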


The performance hit is less because of WASM and more because the Workers platform is fundamentally defined in terms of JavaScript, with WASM just a feature the JS engine has. Everything has to be proxied through JS objects and code, serialized into byte arrays, handed to the WASM, and the same story in reverse.

We need WASM-native interfaces to become common to get rid of the JS layer.
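A small wasm-bindgen sketch (not Workers-specific) of that proxying: even a trivial exported function only sees its argument after the JS glue copies the string into wasm linear memory, and the result is marshalled back out.

    use wasm_bindgen::prelude::*;

    #[wasm_bindgen]
    pub fn count_commas(input: &str) -> u32 {
        // By the time this runs, the JS glue has already encoded the JS
        // string as UTF-8 inside the module's linear memory; the u32
        // result then travels back across the boundary.
        input.bytes().filter(|&b| b == b',').count() as u32
    }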


I ended up using the container service on Azure for a small Rust project that I built in a Docker container and published to GitHub. GitHub Actions publishes to the Azure service, and in the 3 years I have been running it, it's been almost entirely free.


I have a similar experience except I use Go+GitHub Actions+Google Cloud Run.


Been using CF Workers with JavaScript and I absolutely love it.

What is the performance overhead when comparing native Rust against WASM?

Also, I think the time for a FOSS alternative is coming. Serverless with virtually no cold starts is here to stay, but being tied to only 1 vendor is problematic.


> Also, I think the time for a FOSS alternative is coming. Serverless with virtually no cold starts is here to stay, but being tied to only 1 vendor is problematic.

Supabase Edge Functions runs on the same V8 isolate primitive as Cloudflare Workers and is fully open-source (https://github.com/supabase/edge-runtime). We use the Deno runtime, which supports Node built-in APIs, npm packages, and WebAssembly (WASM) modules. (disclaimer: I'm the lead for Supabase Edge Functions)


It would be interesting if Supabase allowed me to use that runtime without forcing me to use Supabase, as a separate product on its own.

Several years ago I used MeteorJS; it uses Mongo and is somewhat comparable to Supabase. The main issue that burned me and several projects was that it was hard or even impossible to bring in different libraries. It was a full-stack solution that did not evolve well: great for prototyping, until it became unsustainable and even hard to onboard new devs, mostly due to the big learning curve of one big framework.

Having learned from this, I only build apps where I can bring whatever library I want. I need tools/libraries/frameworks to be as agnostic as possible.

The thing I love about Cloudflare Workers is that you are not forced to use any other CF service. I have full control of the code, I combine it with HonoJs and I can deploy it as a server or serverless.

About the runtimes: having to choose between Node, Deno and Bun is something that I do not want to do. I'm sticking with Node, and hopefully the runtimes will be compatible with standard JavaScript.


>It would be interesting if Supabase allowed me to use that runtime without forcing me to use Supabase, as a separate product on its own.

It's possible for you to self-host Edge Runtime on its own. Check the repo; it has Dockerfiles and an example setup.

> I have full control of the code, I combine it with HonoJs and I can deploy it as a server or serverless.

Even with Supabase's hosted option, you can choose to run Edge Functions and opt out of the others. You can run Hono in Edge Functions, meaning you can easily switch between CF Workers and Supabase Edge Functions (and vice versa): https://supabase.com/docs/guides/functions/routing?queryGrou...

> Having to choose between Node, Deno and Bun is something that I do not want to do. I'm sticking with Node, and hopefully the runtimes will be compatible with standard JavaScript.

Deno supports most of Node's built-in APIs and npm packages. If your app uses modern Node, it can be deployed on Edge Functions without having to worry about the runtime (having said that, I agree there are quirks, and we are working on native Node support as well).


Cool, I'll check it out.


It surely depends on your use case. Testing my Ricochet Robots solver (https://ricochetrobots.kevincox.ca/), which is pure computation with effectively no IO, the speed is basically indistinguishable. On some runs the WASM is faster, on others the native is. On average the native is definitely faster, but it is surprisingly within the noise.

Last time I compared (about 8 years ago), WASM was closer to double the runtime, so things have definitely improved. (I had to check a handful of times that I was compiling with the correct optimizations in both cases.)
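For anyone who wants to reproduce this kind of comparison, the shape of the test is just a pure-computation kernel timed in both builds. A sketch with a stand-in workload (my solver is more involved, and in a browser you'd time the wasm build with performance.now() instead of Instant):

    use std::time::Instant;

    // Stand-in workload: pure computation, no IO, like the solver.
    fn work() -> u64 {
        (0..10_000_000u64).fold(0, |acc, i| acc ^ i.wrapping_mul(2654435761))
    }

    fn main() {
        let start = Instant::now();
        let result = work();
        // Compile both builds with --release; unoptimized wasm-vs-native
        // comparisons are meaningless (this is what I had to double-check).
        println!("{result} in {:?}", start.elapsed());
    }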


The stats I've seen show a 10-20% loss in speed relative to natively compiled code, which is effectively noise for all but the most critical paths.

It may get even closer with Wasm 3.0, released 2 months ago, since it has things like 64-bit address support, more flexible vector instructions, typed references (which remove runtime safety checks), basic GC, etc. https://webassembly.org/news/2025-09-17-wasm-3.0/


Unfortunately, 64-bit address support does the opposite: it comes with a non-trivial performance penalty, because it breaks the tricks that were used to minimize sandboxing overhead in 32-bit mode.

https://spidermonkey.dev/blog/2025/01/15/is-memory64-actuall...


1) This may be temporary.

2) The bounds-checking argument is a problem, I guess?

3) The article makes no mention of type checking, which is also a new feature; it moves some checks that would normally run at runtime to a one-time check at validation, and that may include bounds-style checks.


The Cloudflare Workers runtime is open source: https://github.com/cloudflare/workerd

People can and do use this to run Workers on hosting providers other than Cloudflare.


It's also worth noting that workerd is only a part of the Cloudflare Workers stack. It doesn't have the same security properties.

https://github.com/cloudflare/workerd#warning-workerd-is-not...

(I know you know this, but frankly you should add a disclaimer when you comment about CF or Capnp. It's too convenient for you to leave out the cons.)


Job scheduling and tenant sandboxing are generally the responsibility of the hosting provider, not the JS runtime. If you are going to run workerd on, say, Lambda, then you rely on Lambda for these things, not workerd. No other server JS runtime offers hardened sandboxing either -- they all defer to the hosting provider.

(Though if we assume no zero-days in V8, then workerd as-is actually does provide strong sandboxing, at least as strong as (arguably stronger than) any other JS runtime. Unfortunately, V8 does in fact have zero-days, quite often.)

What mariopt said above was: "being tied to only 1 vendor is problematic." My point here is that when you build on Workers, you are not tied to one provider, because you can run workerd anywhere. And we do, in fact, have former customers who have migrated off Cloudflare by running workerd on other providers.

> frankly you should add a disclaimer when you comment about CF or Capnp

I usually do. Sometimes I forget. But my name and affiliation are easily discovered by clicking my profile. I note that yours are not.


I think it's pretty well understood that Cloudflare does not actually deploy a VM/container/etc per tenant, but you guys are relying on something like detecting bad behavior and isolating or punishing tenants that try to use attacks in the style of rowhammer: https://developers.cloudflare.com/workers/reference/security... -- so there is secret sauce that is not part of workerd, and one cannot get the same platform as open source.

Meanwhile, somebody like Supabase is making the claim that what you see as open source is what they run, and Deno says their proprietary stuff is KV store and such, not the core offering.

Now, do these vendors have worse security, by trusting the V8 isolates more? Probably. But clearly Cloudflare Workers are a lot more integrated than just "run workerd and that's it" -- which is the base Supabase sales pitch, with Postgrest, their "Realtime" WAL follower, etc.

(I am not affiliated with any of the players in this space; I have burned a few fingers trying to use Cloudflare Workers, especially in any advanced setup or with Rust. You have open, valid, detailed, reproducible, bug reports from me.)


I am not very familiar with Supabase edge functions, but it appears to be based on Deno. According to Deno's documentation, it does not implement hardening against runtime exploits, instead recommending that you set that up separately:

https://docs.deno.com/runtime/fundamentals/security/#executi...

The intro blog post for Supabase edge functions appears to hint that, in production, they use Deno Deploy subhosting: https://supabase.com/blog/edge-runtime-self-hosted-deno-func...

Note that Deno Deploy is a hosting service run by Deno-the-company. My understanding is that they have proprietary components of their hosting infrastructure just like we do. But disclaimer: I haven't looked super-closely, maybe I'm wrong.

But yes, it's true that we don't use containers; instead we've optimized our hosting specifically for isolates as used in workerd, which allows us to run more efficiently and thus deploy every app globally with better pricing than competitors who only deploy to one region. Yes, how we do that is proprietary, just like the scheduling systems of most/all other cloud providers are proprietary.

But how does that make anyone "tied to one vendor"?


> But how does that make anyone "tied to one vendor"?

Because you can't, in the general case, recreate the setup on a different platform? That's like the definition of that expression.

BTW here's Deno saying Deno Deploy is process-per-deployment with seccomp. No idea if that's always true, but I'd expect them to boast about it if they were doing something different. https://deno.com/blog/anatomy-isolate-cloud

Process-per-deployment is something you can reasonably recreate on top of Kubernetes or whatever for self-hosting. And there's always Knative. Note that in that setting, scheduling and tenant sandboxing are not the responsibility of the hosting provider.

Personally, I haven't really felt that cold starts are a major problem when I control my stack, don't compile JavaScript at startup, can leave one instance idling, and so on. Which is why I'm pretty much OK with the "containers serving HTTP" stereotype for many things, when that lets me move them between providers with minimal trouble. Especially considering the pain I've felt with pretty much every "one vendor" stack, hitting every edge-case branch on my way falling down the stack of abstractions. I've very much tried to use Durable Objects over and over and keep coming back to serving HTTP with Rust or TypeScript, using Postgres or SQLite.

Pretending you don't see the whole argument for why people want the option of self-hosting the whole real thing really comes across as the cliched "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"


> BTW here's Deno saying Deno Deploy is process-per-deployment with seccomp.

And that part isn't open source, AFAICT.

> Because you can't, in the general case, recreate the setup on a different platform?

You also can't recreate Lambda on Google Cloud since Lambda's scheduler is not open source.

But you can use Google Cloud Functions instead.

None of these schedulers are open source. Not Deno Deploy, not Supabase, and yeah, not ours either. Standard practice here is to offer an open source local runtime that can be used with other schedulers, but not to open source the cloud scheduler itself.

> Pretending you don't see the whole argument for why people want the option of self-hosting the whole real thing

Yes I get that everyone would like to have full control over their whole stack and would like to have it for free, because of course, why wouldn't you? I like those things too!

But we're a business, we gotta use our competitive advantage to make money.

The argument that I felt mariopt was making, when they said "being tied to only 1 vendor is problematic", is that some proprietary technology locks you in when you use it. Like if you build a large application in a proprietary programming language, then the vendor jacks up the prices, you are stuck. All I'm saying is that's not the case here: we've open sourced the parts needed so that you can switch vendors. The other vendor might not be as fast and cheap as us, but they will be just as fast and cheap as they'd have been if you had never used us in the first place.

I will also note, if we actually open sourced the tech, I think you'd find it not as useful as you imagine. It's really designed for running a whole multi-tenant hosting service (across a globally distributed network) and would be massive overkill for just hosting your own code. workerd is actually better for that.

> Durable Objects

I want to be forthright and admit my argument doesn't currently hold up here. workerd's implementation of Durable Objects doesn't scale at all, so can't plausibly be used in production. We actually have some plans to fix this.


Workers is a V8 isolates runtime, like Deno. V8 and Deno are both open source, and Deno is used in a variety of platforms, including Supabase and ValTown.

It is a terrific technology, and it is reasonably portable, but I think you would be better off using it in something like Supabase, where the whole platform is open source and portable, if those are your goals.


In code I've worked on, cold starts on AWS Lambda for a Rust binary that handled nontrivial requests were around 30ms.

At that point it doesn’t really matter if it’s cold start or not.
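For reference, that setup is roughly this shape, using the lambda_runtime crate (a sketch; details vary by version). The binary is native, so the cold start is mostly process launch:

    use lambda_runtime::{service_fn, Error, LambdaEvent};
    use serde_json::{json, Value};

    async fn handler(event: LambdaEvent<Value>) -> Result<Value, Error> {
        // Native code path: no JS engine or wasm sandbox in front of it.
        Ok(json!({ "echo": event.payload }))
    }

    #[tokio::main]
    async fn main() -> Result<(), Error> {
        lambda_runtime::run(service_fn(handler)).await
    }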


Workerd is already open source, so that's a good start.


The post is a bit sparse on details and seems to be more about the backend than the infra itself. Would be interested to hear more.


I wish Zed would implement support for Jupyter notebooks first. Maybe this is something I can contribute.


I migrated to using the # %% syntax in plain .py files.

For me, it's a superior experience anyway. I also prefer it in editors that support both (like VS code).

You can run the REPL with a Jupyter kernel as well.

https://zed.dev/docs/repl#cell-mode


It's coming; there is already basic support for Jupyter kernels: https://zed.dev/docs/repl


Is the dependency on Cloudflare worth the saved time on infrastructure? Getting a big bare-metal server and deploying a Docker container should go a long way.

This implementation sounds fully dependent on a service that Zed has little say over.


FYI: Cloudflare provides an open source version of their Workers runtime[0], so the lock-in isn't as strong as it once was.

[0]: https://github.com/cloudflare/workerd


I think if the endgame is to run the Workers runtime themselves, then they could also have run something else from the start.

It's gonna be hard to compete with the scaling Cloudflare offers if they migrate to their own dedicated infra, but it would of course become much cheaper than paying per request.


I didn't realize the cloud side of an editor had grown to ~70k lines of Rust already… and this work is laying the foundation for collaborative coding with DeltaDB.

But it's worth noting that WebAssembly still has some performance overhead compared to native; the article chooses convenience and portability over raw speed, which might be fine for an editor backend.



