
Because the LLM is not a cognitive entity with a will: it is a plausibility engine trained on human-authored text and interactions.

So when you tell it that it made a mistake, or that it is stupid, those statements become part of the context and prompt it to produce more of the same.

And only slightly more obliquely: if part of the context includes the LLM making mistakes, expect similar activations.

Best results come if you throw away such prompts and start again. That is, iterate outside the function, not inside it.
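A minimal sketch of that outer loop, assuming a hypothetical generate() wrapper around whatever chat API you use and a looks_wrong() check you supply yourself: on failure the conversation is rebuilt from scratch, so the mistake never enters the prompt.

    def generate(messages: list[dict]) -> str:
        """Hypothetical wrapper around your chat-completion API of choice."""
        raise NotImplementedError

    def looks_wrong(answer: str) -> bool:
        """Hypothetical check: unit tests, a schema validator, a regex, etc."""
        raise NotImplementedError

    def solve(task: str, max_attempts: int = 3) -> str | None:
        for attempt in range(max_attempts):
            # Fresh context every attempt: the failed answer never appears in
            # the prompt, so there is no "LLM making mistakes" text to continue.
            messages = [{"role": "user", "content": task}]
            answer = generate(messages)
            if not looks_wrong(answer):
                return answer
            # Not this -- it keeps the mistake (and the scolding) in context:
            # messages += [{"role": "assistant", "content": answer},
            #              {"role": "user", "content": "Wrong, try again."}]
        return None

The retry loop lives in your code, not in the conversation; each attempt starts clean (optionally with a reworded task) instead of accumulating failed turns.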


