Passing around functions that have already been partially applied means that downstream users can't incorrectly call or accidentally modify the arguments which have already been applied. Whether this actually results in more safety is unclear, as there are other language features or API designs that accomplish the same thing.
Yeah I would think that downstream users would just expect a function that only takes in the params that they specify - leave it up to the API user to just make a closure on the spot.
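The tradeoff above can be sketched in Rust (the names here are hypothetical, purely for illustration): instead of handing downstream users a partially applied function with baked-in arguments, the API exposes the full function and the caller fixes the arguments it cares about with a closure on the spot.

```rust
// Hypothetical full API function -- nothing pre-applied, every
// argument is visible at the call site.
fn send(endpoint: &str, payload: &str) -> String {
    format!("POST {endpoint}: {payload}")
}

fn main() {
    // The caller fixes `endpoint` itself with a closure, rather than
    // receiving a pre-applied function whose captured argument it
    // cannot inspect or change.
    let send_to_api = |payload: &str| send("/api/v1", payload);
    println!("{}", send_to_api("hello"));
}
```

Rust has no built-in partial application, so closures are the idiomatic way to fix arguments; the captured `endpoint` stays visible where the closure is made, rather than being hidden inside an opaque pre-applied function.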
I'm not sure if he's made an official statement about his versioning policy, but if you look at https://crates.io/users/dtolnay?sort=downloads, you can see that most of his crates follow the incremental patch versioning style.
> I found the discussion to be mostly very civilized and focused on finding solutions for those affected.
I really disagree with this characterization of the discussion. While there's plenty of more or less dispassionate comments focused on finding a solution, there's a greater number of people from the peanut gallery drowning that out with comments that amount to "I don't like this!" at best and questioning the integrity of dtolnay at worst. I feel like this demonstrates some of the worst features of open source software on GitHub, where a bunch of nobodies feel the need to add their totally useless input rather than participating in good faith to find solutions and make the software better. You even see comments like that here, speculating that dtolnay is trying to break cargo and force people to use Bazel, which is an insanely conspiratorial, bad-faith interpretation that is obviously untrue and helps no one.
> While there's plenty of more or less dispassionate comments focused on finding a solution, there's a greater number of people from the peanut gallery drowning that out with comments that amount to "I don't like this!" at best and questioning the integrity of dtolnay at worst.
The only comments I see doing that are on here.
The comments on the GitHub issue are all pretty productive and constructive. The worst is the commenter who calls the approach 'despicable', but even they provide constructive feedback and help describe the problem.
Even if it was civil, it was going to annoy the hell out of the maintainer due to the sheer number of notifications. I was accidentally subscribed to the Chrome WEI commit, and the number of notifications was simply horrible.
Also, I'm not lying about the (albeit hyperbolic) calls for dtolnay's head. They're from one of the Rust Discords.
My remark was an observation based on the obvious public resentment, rather than necessarily an exhortation to do so. I promise you, I will not be so cagey and ambiguous when I actually call for someone's head on the chopping block.
Your replies to users still just don't make sense. Literally no one asked about Discord or Reddit; this conversation is about the GitHub discussion.
I didn't move the goalposts - that's how I interpreted the "peanut gallery" part.
But OK, even limiting yourself only to GitHub comments, you can still find doubt-casting and uncharitable interpretations:
> Nice micro-optimization. And if you bury your head deep enough into the sand, doesn't sacrifice anything important either.
> "The only supported way ..." is a polite way of saying "Works for me, WONTFIX".
> Another interpretation of an earlier comment is that he is all-in on bazel/buck and does not care about cargo. Both interpretations have me concerned.
> And ~80 comments with "This doesn't work for me".
Even in this case, the __jem comments make sense. Many people are just writing their own version of "I don't like this," meaning any interesting message is drowned out by noise. THAT is what will make any maintainer limit threads to contributors.
Are you just making stuff up? Not one of those quotes is a comment on the GitHub issue that was closed and which was being discussed in this thread: https://github.com/serde-rs/serde/issues/2538
I already referenced the worst comment and it falls closer to brutally honest than actually rude.
> Are you just making stuff up? Not one of those quotes is a comment on the GitHub issue
It's customary to assume the other side is arguing in good faith and not making stuff up. I assumed you had read all the comments and had access to GitHub, so you wouldn't have problems finding them. But apparently that wasn't the case.
> I already referenced the worst comment and it falls closer to brutally honest than actually rude.
It's hard to take this at face value when you mistook my quotes for outright fabrication. You either didn't read them all, or you conveniently forgot the examples I cited.
Also, the "nice micro-optimization" comment is dishonest; apparently dtolnay felt the optimizations provided for his use case were significant enough to warrant a fix.
People have complained about the build time of proc macros for ages in the community. This might be a misguided hack, but the response to this is bordering on a witch hunt, particularly when there is a glaring security hole (build.rs) that most people likely use without second thought every single day. I simply do not believe that most people commenting on this issue are auditing the builds of all their transitive dependencies.
Yeah, there are already binaries in the crates.io ecosystem, and I'm certain that almost none of these people have audited a `build.rs` file or a proc macro implementation which effectively runs as you, completely unsandboxed.
EDIT: I was wrong, this is not actually `watt` -- it may have been re-using code from the project.
This is one of those pile-ons where everyone gets excited about having a cause du jour to feel passionate about, while simultaneously ignoring issues that are far more pressing.
You keep saying this but I suggest you actually look at the code. The precompiled binary is not a sandboxed WASM binary. Despite the name "watt" it has nothing to do with https://github.com/dtolnay/watt . `watt::bytecode` refers to the serialization protocol used by the proc macro shim and the precompiled binary to transfer the token stream over stdio, not anything related to WASM.
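For intuition, the shim/binary split described here can be sketched as a subprocess round trip. This is a loose illustration, not serde_derive's actual code; `cat` stands in for the precompiled binary so the stdio pipe can be demonstrated, and the "serialization" is just raw bytes.

```rust
use std::io::Write;
use std::process::{Command, Stdio};

// Sketch: the proc-macro shim serializes the input token stream,
// pipes it to a child process over stdin, and reads the expanded
// stream back from stdout.
fn expand_via_subprocess(binary: &str, serialized_tokens: &[u8]) -> std::io::Result<Vec<u8>> {
    let mut child = Command::new(binary)
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()?;
    // Send the serialized token stream; dropping the handle closes
    // stdin so the child sees EOF.
    child
        .stdin
        .take()
        .expect("stdin was piped")
        .write_all(serialized_tokens)?;
    // Collect the child's expansion from stdout.
    let output = child.wait_with_output()?;
    Ok(output.stdout)
}

fn main() -> std::io::Result<()> {
    // `cat` echoes its input, standing in for a real expander binary.
    let expanded = expand_via_subprocess("cat", b"struct Demo;")?;
    println!("{}", String::from_utf8_lossy(&expanded));
    Ok(())
}
```

The point is that nothing in this arrangement sandboxes the child: it is an ordinary native process running with the invoking user's full privileges, which is exactly why "it's WASM, so it's sandboxed" doesn't apply here.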
Also it's worth noting that even if it was a sandboxed binary a la https://github.com/dtolnay/watt , it's not obvious that distributions or users would be satisfied with that. For example, Zig had this discussion about their own WASM blob compiler that they use as part of bootstrapping: https://news.ycombinator.com/item?id=33915321 . As I suggested there, distributions might be okay with building their own golden blobs that they maintain themselves instead of using upstream's, and that could even work in this Rust case for distributions that only care about a single copy of serde for compiling everything. But it's hard for the average user doing `cargo build` for their own projects with the cargo registry in `~/.cargo` to do the same replacement.
A really nice (IMO) solution would be to build a wasm blob reproducibly and to ship the blob’s hash, along with a way to download the blob, as part of a release. Then distros could build the blob, confirm that the hash matches, and ship a package that is built from source and nonetheless bit-for-bit identical to the upstream binary.
> almost none of these people have audited a `build.rs` file or a proc macro implementation which effectively runs as you, completely unsandboxed
The biggest orgs tend to run all of their builds sandboxed, including what happens with build.rs. It's part of how they enforce dependency management day to day, but also helps protect against supply chain attacks.
So not everyone does, but enough people do that you can rely on their complaints for the more well trodden parts of the ecosystem.
> The biggest orgs tend to run all of their builds sandboxed, including what happens with build.rs. It's part of how they enforce dependency management day to day, but also helps protect against supply chain attacks.
Sandboxing doesn't completely prevent supply-chain attacks. You can avoid getting persistent malware on your CI machines, sure. Exfiltration of tokens and secrets during build? Maybe in a small handful of CI setups where admins have carefully split the source fetch, or limited CI network access to Nexus. Exfiltration of tokens and secrets and/or backdooring after the software has been deployed to production? No, build-time sandboxing doesn't help here at all.
On all developer machines as well? No. Very few big orgs do this, and only for mission-critical stuff. Some very important ones have Docker-based sandboxed workflows or SSH-to-sandboxed-cluster workflows, or air-gapped laptops, but that's very, very rare (I worked in an air-gapped environment for a bit and it was a massive pain).
Oh wow. I'd be very interested in hearing how they sandbox rust-analyzer. I found a discussion of supporting the analyzer itself by generating config files [1][2], but not how you can sandbox it.
That would be extremely useful as the analyzer is a pretty juicy target and also runs proc-macros/build.rs scripts.
Yeah, everywhere I’ve been going back to 1995 does this for high-security environments, and if we detected an issue we had security response teams that worked with maintainers and others to remediate.
For lower environments we generally used 3 month or more embargoes on precompiled stuff we couldn’t easily compile ourselves to mitigate some of the supply chain issues if we weren’t directly managing the chain of trust for the binary.
I'm quite ambivalent on this issue overall, but let me just point out:
build.rs absolutely is a glaring security hole in the sense you say, but compared to that, this is much worse. You can verify the build.rs code that you download (at least in theory, and some people in banks or distro packages probably actually do), but binaries are orders of magnitude more difficult to inspect, and with the current Rust build system pretty much irreproducible.
> build.rs absolutely is a glaring security hole in the sense you say, but compared to that, this is much worse. You can verify the build.rs code that you download
In theory you can compile your own blob, but you'll need musl and whatnot to make a universal Linux build. Code for making the blob is there in the repo.
build.rs is at best equal. It can access your locally available DB, and transmit your data.
The problem here is that nothing in that build is pinned. It builds with nightly, but doesn't specify which version/date.
We also don't know how it's built. Ideally there would be a Docker container out there that does just an import of the source code and then builds. No apk install or apt install (you'd do that in a base published layer). Referenced with a SHA256.
We then use this Docker container to pull in the source code AND its dependencies based on a Cargo.lock. Which... isn't there. So we don't know the exact dependencies that went in.
(Even if there were a Cargo.lock, we'd need to make sure we actually respect it. `cargo install` by default ignores the lock file and pulls the latest matching versions unless you pass `--locked`.)
build.rs is a source file that you can audit. A binary that has no reproducible build is not auditable even if anyone wanted to.
A single person does not audit all of their dependency tree, but many people do read the source code of some if not many of their dependencies, and as a community we can figure out when something is fishy, like in this case.
But when there are binaries involved, nobody can do anything.
This isn't the same as installing a signed binary from a linux package manager that has a checksum and a verified build system. It's a random binary blob someone made in a way that nobody else can check, and it's just "trust me bro there's nothing bad in it".
> and it's just "trust me bro there's nothing bad in it".
The developer should be very concerned about what happens if his system(s) are compromised and the attacker slips a backdoor into these binaries-- it will be difficult to impossible to convince people that the developer himself didn't do it intentionally. Their opacity and immediacy make them much more interesting targets for attack than the source itself (and its associated build scripts).
Saving a few seconds on the first compile on some other developer's computer hardly seems worth that risk.
And at the meta level, we should probably worry about the security practices of someone who isn't worrying about that risk-- what else aren't they worrying about?
At least Linux, OpenBSD, and (with more annoyance) Windows make it relatively straightforward to run things like build.rs in a sandbox. I wonder why Cargo doesn’t do this.
There was also a project (that dtolnay was involved in, I believe!) a few years ago to compile proc macros to wasm.
It’s probably true that most people commenting don’t audit the builds of transitive deps, but the original issue was a distro that couldn’t distribute precompiled binaries, I’m going to guess this has something to do with their license.
I think having an exit path for those that want to compile from source is important, and I can’t understand the reluctance to provide that.
Well, there is an exit path for those who want to compile from source. If you mean build from source for Cargo users, I believe there's issues with how feature flags interact with transitive dependencies that make this difficult. At least, there's comments on the issue that speak to this. Maybe someone more familiar with Cargo can chime in.
> However, Tailscale focuses on giving different devices access to each other, while Narrowlink focuses on providing access to services through the agent acting as a proxy.
Could not disagree more strongly that Datomic is well maintained. I'd view it as a significant liability in any organization using it without very good reason. My experience operationally supporting Datomic at even a moderate scale was a total nightmare and soured me on the Clojure ecosystem as a whole.
may i ask what you consider moderate scale in terms of daily/concurrent users? i’ve had no issue with small projects, but i’m curious to know where it starts falling over
Well, in the real world the premise is false: not because writing memory-safe code is in fact easy, but because, in aggregate, in large code bases it's impossible.