> Coding agents write code that doesn't converge, meaning code that they cannot evolve after a while
That's not true, and I'm not sure what that even means. It's totally up to you, the human, to ensure AI code is mergeable or evolvable, or meets your quality standards in general. I certainly have had to tell Claude to use different approaches for maintainability, and the result is no different than if I had done it myself.
Sure, if you vigilantly review the agent's output and say no when it's wrong (which happens very frequently), then things work. I meant that without such very close supervision things don't converge, because agents make mistakes that compound.