Hacker News | ohr's comments

(Author here) yes! I also wrote about this (specifically in Python->Rust FFI) in another article:

https://ohadravid.github.io/posts/2023-03-rusty-python/#v2--...


(Author here) thanks! Even as an experienced Rust developer, this is a lot of syntax, which is part of what makes it feel archaic, I think.

As usual with Rust, most of it is due to inherent complexity: for example, when you deal with raw pointers, you need to specify if they are const or mut, and you must have a syntax that's not &, and not too wordy: so * is a good choice (might be changed to &raw in the future!). And if you want to say that something is generic and the generic parameter is a pointer… you just end up with a lot of syntax.
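A minimal sketch of the syntax described above (the function name here is illustrative, not from the article): raw pointers spell out constness as `*const T` / `*mut T`, and a generic over a pointer stacks that on top of the type parameter.

```rust
// Illustrative only: `*const T` vs `*mut T`, plus a generic taking a raw pointer.
unsafe fn read_through<T: Copy>(ptr: *const T) -> T {
    // Dereferencing a raw pointer requires `unsafe`.
    unsafe { *ptr }
}

fn main() {
    let x: u32 = 42;
    let p: *const u32 = &x; // may also be written `&raw const x` in newer Rust
    let mut y: u32 = 7;
    let q: *mut u32 = &mut y;

    unsafe {
        assert_eq!(read_through(p), 42);
        *q = 8;
        assert_eq!(read_through(q as *const u32), 8);
    }
}
```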


But if you let it run the tests, it’ll just do that on its own (or delete the tests, but usually it won’t), which is very useful and seems to match the article’s sentiment.


Test-driven development is back.

Since LLMs rely on patterns learned from large amounts of text, the results are relative. If the training data contained more Rust repos, that could explain why it feels stronger in Rust.

The way AI companies talk about "intelligence" now is shifting. They admit LLMs can't truly reason with the current architecture, so intelligence is being framed as the ability to solve problems using patterns learned from text, not genuine reasoning. That's a big downgrade from the original idea of AI reaching human-level thinking and developing into AGI.

Also, my understanding is that since Microsoft invests in Copilot, it doesn't want ChatGPT to get better at coding. Instead, it wants it to get better at being a lawyer.


Used the Save Page WE Chrome extension to capture the html (after cleaning up with inspect element + delete), and added a bit of custom JavaScript to scroll everything to the right place. Needed the extension for the styling to be captured correctly.


(Author here) I'm a huge fan of the "How to speed up the Rust compiler" series! I was hoping to capture the same feeling :)


Having your last name be Ravid really is the icing on your cake.

Real is about the only other codec I see that could be a name, but nobody uses that anymore.


Do your part: name your kids "ffmpeg" and "vp-IX"!


I think that since the 1.5% one is only for aarch64, it's a bit unfair to claim the full number; more like 1/2, if you consider arm and x86 to each make up a large share of the (future) deployments.


I suppose that’s fair, but I’d give credit for a 2.3% improvement in the test environment. For all we know it may be a net loss in other environments due to quirks (probably not, admittedly).


Thanks! Would be interesting to see if Rust/LLVM folks can get the compiler to apply this optimization whenever possible, as Rust can be much more accurate w.r.t. memory initialization.


I think Rust may be able to get it by adding a `freeze` intrinsic to the codegen here. That would force LLVM to pick a deterministic value if there was poison, and should thus unblock the optimization (which is fine here because we know the value isn't poison).


I think in this case the Rust and C code aren't equivalent, which may have caused this slowdown. The union trick also affects alignment: the C-side struct is 32-bit aligned, but the Rust struct only has 16-bit alignment, because it only contains fields with 16-bit alignment. In practice the fields are likely correctly aligned to 32 bits anyway, but compiler optimizations may have a hard time verifying that.

Have you tried manually defining the alignment of the Rust struct?
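For illustration (struct and field names here are hypothetical, not from the article), Rust's `repr(align)` attribute can raise a struct's alignment so it matches a 32-bit-aligned C counterpart:

```rust
use std::mem::{align_of, size_of};

// A struct of only 16-bit fields normally gets 2-byte alignment;
// `align(4)` forces the 4-byte (32-bit) alignment the C side has.
#[repr(C, align(4))]
#[allow(dead_code)]
struct Aligned {
    a: u16,
    b: u16,
}

#[repr(C)]
#[allow(dead_code)]
struct Unaligned {
    a: u16,
    b: u16,
}

fn main() {
    assert_eq!(align_of::<Unaligned>(), 2); // largest field's alignment
    assert_eq!(align_of::<Aligned>(), 4);   // forced by repr(align(4))
    assert_eq!(size_of::<Aligned>(), 4);    // no padding needed here
}
```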


Would be great, but I wouldn't hold my breath for it. LLVM and rustc can both be kinda slow to stabilize things.


It varies. New public APIs or language features may take a long time, but changes to internals and missed optimizations can be fixed in days or weeks, in both LLVM and Rust.


(Author here) Thanks! That's useful feedback.

I also agree - the final article isn't skim-friendly enough, which drives away some readers.


Glad you are open to feedback. My top question is: What kind of people do you want to read this and why?


In WMI, the fields are lazy-loaded when you do a `*` query, but the real crate [does use the same Serde reflection tricks](https://github.com/ohadravid/wmi-rs/blob/main/src/query.rs#L...) to create the correct field list when you query a struct, which improves perf a lot!


That's understandable, but I think it depends on how many different structs like this you have and how many fields you need to work with (for our use case, we had tens of structs with tens of fields each).

There's also an Alternatives section in the article about other approaches that can achieve similar results, but of course 'do nothing' is also a valid option.

Edit: > If actually concerned about the need to know UI8 ..

Just a small note: even if you don't care about the fact that it's a UI8, you still have to use the correct type. For example, if the field happens to be returned as UI4, this code won't work!


Right, but isn't the struct definition equivalent in line count and effort compared to some typedefs and perhaps a handful of trivial-to-inspect oneline helper functions?

Regarding the UI8, don't you have to get your version's struct data member type correct to the exact same degree as a typedef in my suggestion?


> don't you have to get your version's struct data member type correct

No, since Serde will happily fill a `u64` field with any other `u{8,16,32}` value, and even with signed types (as long as the actual value is non-negative) - this is sort of what happens when you deserialize a JSON `[1, 2, 3]` into `[u64]`.


Yes, but an equivalent to `impl<'de> Deserializer<'de> for ValueDeserializer` handles that. That could be a useful helper.

