> The second is that the LLMs don’t learn once they’re done training, which means I could spend the rest of my life tutoring Claude and it’ll still make the exact same mistakes, which means I’ll never get a return for that time and hypervigilance like I would with an actual junior engineer.
However, this creates a significant return on investment for open-sourcing your LLM projects. In fact, you should commit your LLM dialogs along with your code. The LLM won't learn immediately, but it will learn in a few months, when the next model refresh comes out.
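There's no standard way to do this yet, so here's a minimal sketch of what I mean. Everything here is my own invention: the `llm_dialogs/` directory, the `save_dialog` helper, the filename scheme, and the JSON shape are hypothetical conventions, not an established practice or any vendor's API.

```python
#!/usr/bin/env python3
"""Snapshot an LLM dialog transcript next to the code it produced.

The llm_dialogs/ layout and JSON schema below are hypothetical
conventions for illustration, not a standard.
"""
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path

DIALOG_DIR = Path("llm_dialogs")  # assumed layout: one JSON file per session


def save_dialog(messages: list[dict], model: str) -> Path:
    """Write the transcript, tagged with the current git HEAD, so a
    future reader (or a future training run) can pair prompts with
    the code they shaped."""
    DIALOG_DIR.mkdir(exist_ok=True)
    head = subprocess.run(
        ["git", "rev-parse", "--short", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = DIALOG_DIR / f"{stamp}-{head}.json"
    path.write_text(json.dumps(
        {"model": model, "commit": head, "messages": messages},
        indent=2,
    ))
    return path


if __name__ == "__main__":
    # A transcript is just the usual role/content message list.
    transcript = [
        {"role": "user", "content": "Refactor the parser to be streaming."},
        {"role": "assistant", "content": "Here's a streaming version..."},
    ]
    print(save_dialog(transcript, model="claude"))
```

Then `git add llm_dialogs/` in the same commit as the code, so the two travel together.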
All LLM output is non-deterministically wrong. Without a human in the loop who understands the code, you are stochastically releasing broken, insecure, unmaintainable software.
Any software engineer who puts a stamp of approval on software they have not read and understood is committing professional malpractice.
We've tried Literate Programming before, and it wasn't helpful.
Mostly because we almost never read code to understand the intention behind it: we read it to figure out why the fuck it isn't working, and the author's intentions don't help us answer that.