Hacker News

Fil-C is slow.

There is no memory-safe C or C++ compiler with acceptable performance for kernels, rendering, games, etc. For that you need Rust.

The future includes Fil-C for legacy code that isn’t performance sensitive and Rust for new code that is.



No, Rust is awful for game development; it's not really what it was intended for. For one, all the graphics APIs are in C, so you would have to use unsafe FFI basically everywhere.


How slow? In some contexts, the trade-off might be acceptable. From what I've seen in pizlonator's tweets, in some cases the difference in speed didn't seem drastic to me.


Yeah, I would happily run a bunch of my network services in this. I have loads of public-facing services that do a lot of complex parsing and rule evaluation and are mostly idle. For example, my whole mailserver stack could probably benefit from this; my few messages an hour can run 2x slower. Maybe I would leave dovecot native, since the attack surface before authentication is much lower and the performance difference would be more noticeable (mostly for things like searches).


You may be aware that one of the things Bernstein is famous for is revolutionizing mailserver security.


I imagine apt is usually I/O constrained?


That's my guess, yeah

Also, Fil-C's overheads are the lowest for programs that are pushing primitive bits around.

Fil-C's overheads are the highest for programs that chase pointers.

I'm guessing the CPU-bound bits of apt (if there are any) are more of the former.


What does that have to do with apt?


Enough of it is performance sensitive that Fil-C is not an option.

Fil-C is useful for the long tail of C/C++ that no one will bother to rewrite and is still usable if slow.


How is apt performance sensitive?


Apt has been painfully slow since I started using Debian last millennium, but I suspect it's not because it uses a lot of CPU, or it would be snappy by now.


It parses formats and does TLS, I’m assuming it’d be quite bad. I don’t think you can mix and match.


stuff that talks to "the internet" and runs as "root" seems like a good thing to build with filc.


It probably uses OS sandboxing primitives already.


In normal operation, apt has to be able to upgrade the kernel, the bootloader, and libc, so it can't usefully be sandboxed except for testing or chroots.


No, that doesn't follow. That only means the networking and parsing functions can't be sandboxed in the same process that drops new root-owned files. C and C++ services have been using subprocesses for sandboxing risky functionality for a long time now. It appears Apt has some version of this:

https://salsa.debian.org/apt-team/apt/-/blob/main/apt-pkg/co...


That's true; you can't usefully sandbox apt as a whole, but, because it verifies the signatures of the packages it downloads, you could usefully sandbox the downloading process, and you could avoid doing any parsing on the package file until you've validated its signature. It's a pleasant surprise to hear that it already does something like this!



