
>Normally if you get those kinds of errors you wouldn’t get any output at all

I'm not sure I agree. If there's a pro-ChatGPT user, I'm probably it.

I've often seen it put significantly less effort into answering the question.



Interesting. I can maybe try finetuning one or two of the so-called 'uncensored' open models and see if that makes a difference. It'd be a bit harder to switch out the dataset completely, as that's really what I'm interested in :) I think the general point that finetuning a model for some custom task works is fairly uncontroversial, but if OpenAI's poor performance was on account of these kinds of guardrails, it'd be yet another reason someone might want to finetune their own models, I guess.
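
For what it's worth, the "finetune an open model on your own dataset" step is pretty small these days. Below is a minimal sketch using the Hugging Face transformers/peft/datasets stack with LoRA adapters; the model name and "train.jsonl" file are placeholders, not anything from the thread, and the hyperparameters are just illustrative defaults.

    # Minimal LoRA finetuning sketch (assumes transformers, peft, datasets installed).
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                              TrainingArguments, DataCollatorForLanguageModeling)

    model_name = "mistralai/Mistral-7B-v0.1"  # placeholder: any open base model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Wrap the base model with low-rank adapters so only a small
    # fraction of the parameters are actually trained.
    model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                             task_type="CAUSAL_LM"))

    # Placeholder custom dataset: a JSONL file with a "text" field per record.
    data = load_dataset("json", data_files="train.jsonl")["train"]
    data = data.map(lambda x: tokenizer(x["text"], truncation=True,
                                        max_length=512), batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=1,
                               per_device_train_batch_size=1),
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

Swapping the dataset is just a matter of pointing load_dataset at a different file; swapping the base model is the one-line model_name change, which is why trying a couple of the 'uncensored' variants is cheap to test.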



