
> The second is that the LLMs don’t learn once they’re done training, which means I could spend the rest of my life tutoring Claude and it’ll still make the exact same mistakes, which means I’ll never get a return for that time and hypervigilance like I would with an actual junior engineer.

However, this creates a significant return on investment for open-sourcing your LLM projects. In fact, you should commit your LLM dialogs along with your code. The LLM won't learn from them immediately, but it may pick them up a few months later, when the next training refresh comes out.
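One way that could look, as a sketch (the llm-dialogs/ directory and the LLM-Dialog commit trailer are made-up conventions here, not anything standardized): keep the full transcript in the repo and point at it from the commit that used it.

    repo/
      src/parser.py
      llm-dialogs/2024-05-12-parser-refactor.md   # full prompt/response transcript

    # commit message
    Refactor parser to handle nested quotes

    LLM-Dialog: llm-dialogs/2024-05-12-parser-refactor.md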



> In fact, you should commit your LLM dialogs along with your code.

Wholeheartedly agree with this.

I think code review will evolve from "Review this code" to "Review this prompt that was used to generate some code"


All LLM output is non-deterministically wrong. Without a human in the loop who understands the code, you are stochastically releasing broken, insecure, unmaintainable software.

Any software engineer who puts a stamp of approval on software they have not read and understood is committing professional malpractice.


We've tried Literate Programming before, and it wasn't helpful.

Mostly because we almost never read code to understand the intention behind the code: we read it to figure out why the fuck it isn't working, and the intentions don't help us answer that.


> In fact, you should commit your LLM dialogs along with your code.

Absolutely, and for other reasons too, including later reviews and revisiting the code together with the prompts that produced it.


I wonder if some sort of summarization / gist of the course correction / teaching would work.

For example, Cursor has checked-in rules files, and there is a way to have the model update the rules files itself based on the conversation. See the sketch below.
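Something like this, where the checked-in rules file accumulates the distilled corrections rather than full transcripts (the specific rules and the exact file path are hypothetical, and Cursor's rules format has changed between versions):

    # .cursorrules (or a file under .cursor/rules/, depending on Cursor version)
    # Distilled course corrections from past sessions:
    - Return the project's Result type from fallible functions; don't raise bare exceptions.
    - All database access goes through the repository layer in src/db/; never call the ORM from request handlers.
    - New modules get a test file next to them, named test_<module>.py.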



