Indeed, this is exactly the type of subtle case you'd worry about when porting. Fuzzing would be unlikely to discover a bug that only occurs on giant inputs or needs a special configuration of lists.
In practice I think it works out okay because most of the time the LLM has written correct code, and when it doesn't it's introduced a dumb bug that's quickly fixed.
Of course, if the LLM introduces subtle bugs, that's even harder to deal with...
> most of the time the LLM has written correct code [...dumb bugs]
What domain do you work in?
I hope I'm just misusing the tool, but I don't think so: I have a math+ML+AI background, I can make LLMs perform in other domains, I can make them sing and dance for certain coding tasks, I've watched other people struggle in the same ways I do when using LLMs for most coding tasks, and I haven't yet seen evidence of anyone doing better. On almost any problem where letting an LLM attempt it ought to be faster than banging out a solution myself, it only comes close to being correct after intensive, lengthy prompting -- more effort than just typing the right thing in the first place. When it's wrong, the bugs often take more work to spot than writing the right thing would have, since you have to carefully scrutinize each line anyway while simultaneously reverse engineering the rationale for each decision. Some examples:

- The API is structured and named such that you'd expect pagination to be handled automatically, but that's actually an additional requirement the caller must handle, leading to incomplete reads which look correct in prod ... till they aren't.
- When moving code from point A to point B it removes a critical safety check, but the git diff is next to useless, so you have to hand-review that sort of tedium and actually analyze every line instead of trusting the author when they say a certain passage is a copy-paste job.
- It can't automatically pick up on the local style (even when explicitly prompted as to that style's purpose) and requires a hand-curated set of examples to figure out what a given comptime template should actually be doing, violating all sorts of invariants in the generated code, like running blocking syscalls inside an event loop implementation while using APIs which make doing so _look_ innocuous.
I've shipped a lot of (curated, modified) LLM code to prod, but I haven't yet seen a single model or wrapper around such models capable of generating nearly-correct code "most" of the time.
I don't doubt that's what you've actually observed though, so I'm passionately curious where the disconnect lies.
I might have phrased this unclearly; I meant specifically the case of translating one symbol at a time from C to Rust. I certainly won't claim I've figured out any magic that makes the coding agents consistent!
Here you've got the advantage that you're repeating the same task over and over, so you can tweak your prompt as you go, and you've got the "spec" in the form of the C code there, so I think there's less to go wrong. It still did break things sometimes, but the fuzzing often caught it.
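To give a rough idea of what a check like that can look like (a sketch only -- `c_checksum` and `rust_checksum` are hypothetical names, not symbols from the actual port): the harness feeds the same fuzzer-generated input to the original C routine via FFI and to the Rust translation, and fails the moment they disagree.

```rust
// Hypothetical differential harness: `c_checksum` stands in for the original
// C symbol (linked in via FFI) and `rust_checksum` for its Rust port.
extern "C" {
    fn c_checksum(data: *const u8, len: usize) -> u64;
}

fn rust_checksum(data: &[u8]) -> u64 {
    // ...the ported implementation; a trivial stand-in here...
    data.iter().fold(0u64, |acc, &b| acc.wrapping_add(b as u64))
}

/// Called with fuzzer-generated bytes; panics if the port diverges from the C original.
fn differential_check(data: &[u8]) {
    let c_result = unsafe { c_checksum(data.as_ptr(), data.len()) };
    let rust_result = rust_checksum(data);
    assert_eq!(c_result, rust_result, "Rust port diverges from C on this input");
}
```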
It does require careful prompting. In my first attempt Claude decided that some fields in the middle of an FFI struct weren't necessary. You can imagine the joy of trying to debug how a random pointer was changing to null after calling into a Rust routine that didn't even touch it. It was around then I knew the naive approach wasn't going to work.
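For anyone who hasn't hit that failure mode: with `#[repr(C)]` the Rust and C definitions have to agree field-for-field, so dropping an "unused" field in the middle shifts every later field onto the wrong offset. A made-up illustration (hypothetical struct, not the real one):

```rust
// C side, for reference:
//   struct Ctx { int id; void *scratch; void *callback; };
//
// If the Rust mirror omits `scratch` because it "isn't used", then
// `callback` gets read at `scratch`'s offset and appears to turn into
// garbage or null even though no Rust code ever writes to it.
#[repr(C)]
struct Ctx {
    id: i32,
    scratch: *mut core::ffi::c_void,  // must be kept to preserve the layout
    callback: *mut core::ffi::c_void,
}
```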
In general I've found the agents to be a mixed bag, but overall positive if I use them in the right way. It works best for me if I use the agent as a sounding board to write down what I want to do anyway. I then have it write some tests for what should happen, and then I see how far it can go. If it's not doing something useful, I abort and just write things myself.
It does change your development flow a bit, for sure. For instance, it's so much more important to have concrete test cases that force the agent to get it right; otherwise, as you mention, it's easy for it to do something subtly broken.
For instance, I switched to tree-sitter from the clang API to do symbol parsing, and Claude wrote effectively all of it; in this case it was certainly much faster than writing it myself, even if I needed to poke it once or twice. This is sort of a perfect task for it though: I roughly knew what symbols should come out and in what order, so it was easy to validate the LLM was going in the right direction.
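For anyone curious what that looks like, here's a minimal sketch of pulling function names out of C source with the tree-sitter Rust bindings (assuming roughly the 0.20-era `tree-sitter` and `tree-sitter-c` crate APIs; it only handles simple declarators):

```rust
use tree_sitter::Parser;

fn c_function_names(source: &str) -> Vec<String> {
    let mut parser = Parser::new();
    parser
        .set_language(tree_sitter_c::language())
        .expect("grammar/version mismatch");
    let tree = parser.parse(source, None).expect("parse failed");

    let root = tree.root_node();
    let mut cursor = root.walk();
    let mut names = Vec::new();
    for node in root.children(&mut cursor) {
        if node.kind() == "function_definition" {
            // function_definition -> declarator (function_declarator)
            //                     -> declarator (identifier)
            if let Some(name) = node
                .child_by_field_name("declarator")
                .and_then(|d| d.child_by_field_name("declarator"))
            {
                names.push(source[name.byte_range()].to_string());
            }
        }
    }
    names
}
```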
I've certainly had them go the other way, reporting back that "I removed all of the failing parts of the test, and thus the tests are passing, boss" more times than I'd like. I suspect the constrained environment again helped here; there's less wiggle room for the LLM to misinterpret the situation.
> Fuzzing would be unlikely to discover a bug that only occurs on giant inputs or needs a special configuration of lists.
I have a concern about people's overconfidence in fuzz testing.
It's a great tool, sure, but all it does is select (and try) inputs at random from the set of all possible inputs that can be generated for the API.
For a strongly typed system, that means randomly selecting ints from all the possible ints for an API that only accepts ints.
If the API accepts any group of bytes, fuzz testing is going to randomly generate groups of bytes to try.
The only advantage this has over other forms of testing is that it's not constrained by people thinking "oh, these are the likely inputs to deal with."
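That random sampling is easy to picture -- something like this sketch (assuming the rand 0.8 crate API), which just throws uniformly random byte strings at the target with no feedback loop:

```rust
use rand::Rng; // assumes the rand 0.8 crate

// "Dumb" fuzzing: uniformly random inputs, no information about which
// inputs reached new code paths.
fn dumb_fuzz(iterations: usize, target: impl Fn(&[u8])) {
    let mut rng = rand::thread_rng();
    for _ in 0..iterations {
        let len: usize = rng.gen_range(0..4096);
        let input: Vec<u8> = (0..len).map(|_| rng.gen::<u8>()).collect();
        target(&input);
    }
}
```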
This is not quite true; what you're describing is "dumb" fuzzing. Modern fuzzers are coverage-guided: they search for, and devote more effort to, inputs that trigger new branches / paths.
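With cargo-fuzz / libFuzzer, for instance, the harness itself is tiny and the instrumentation does the interesting work, steering mutation toward inputs that hit new branches. Something like the standard template below, where `my_crate::parse` is a hypothetical stand-in for whatever is under test:

```rust
// fuzz/fuzz_targets/parse.rs -- a standard cargo-fuzz (libFuzzer) target.
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    // libFuzzer compiles this with coverage instrumentation and mutates
    // `data` toward inputs that reach previously unseen branches, rather
    // than sampling uniformly from all possible byte strings.
    let _ = my_crate::parse(data); // hypothetical function under test
});
```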