
> We consistently found that not enumerating an exact list or instructions in the prompt produced better results

Not sure if he means training here or using his product. I think the latter.

My end-user experience of GPT-3.5 is that I need to be not just precise but the exact flavor of precise, and that flavor usually only emerges after some trial and error. Then more error. Then more trial.

Getting a useful result on the first or third try happens maybe 1 in 10 sessions. A bit more common is having 3.5 include the very thing I clearly asked it not to. It does often comply eventually.
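
For what it's worth, the contrast the quoted line describes is easy to try yourself. A minimal sketch, assuming the pre-1.0 openai Python library and an OPENAI_API_KEY env var; the prompts and the TEXT placeholder are my own illustrations, not anything from the thread:

    import openai

    TEXT = "..."  # placeholder: whatever document you want summarized

    def ask(prompt: str) -> str:
        """Send a single-turn chat request and return the reply text."""
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return response["choices"][0]["message"]["content"]

    # Enumerated negative rules -- the style 3.5 reportedly ignores,
    # echoing the text it was told to leave out:
    enumerated = ask(
        "Summarize the text below. Rules: 1) no bullet points, "
        "2) do not mention the author, 3) at most two sentences.\n\n" + TEXT
    )

    # Looser, positively framed request -- closer to what the quoted
    # line suggests (describe the output you want, don't enumerate):
    loose = ask(
        "Give a brief two-sentence prose summary of the text below, "
        "focusing only on its main argument.\n\n" + TEXT
    )

Running both against the same input a few times makes the difference in compliance pretty visible, at least anecdotally.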



OP uses GPT-4 mostly. Another poster here observed that "the opposite is required for 3.5" -- so I think your experience makes sense.



