
You approach it from a data-science perspective and ensure more signal in the direction of the new discovery, e.g. by saturating or fine-tuning with data biased toward that direction.
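A minimal sketch of the oversampling idea, assuming a toy corpus of strings; the helper name and the oversample factor are hypothetical, and a real fine-tuning pipeline would operate on tokenized training examples rather than raw strings:

```python
import random

def build_finetune_mix(base_corpus, new_examples, oversample=5, seed=0):
    # Hypothetical helper: duplicate the "new discovery" examples so a
    # fine-tuning run sees disproportionately more signal in that direction.
    mix = list(base_corpus) + list(new_examples) * oversample
    random.Random(seed).shuffle(mix)  # avoid ordering effects during training
    return mix

base = ["old fact A", "old fact B", "old fact C"]
new = ["new discovery X"]
mix = build_finetune_mix(base, new, oversample=5)
print(mix.count("new discovery X"))  # 5 copies vs. 1 of each old fact
```

The bias here is deliberate: by skewing the data distribution, you raise the likelihood the model prefers the new fact over the stale one.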

The "thinking" paradigm might also be a way of combating this issue, ensuring the model is primed to say "wait a minute". But to me that is cheating in a way: it likely works because real thought is full of backtracking, recalling, and "gut feelings" that something isn't entirely correct.

The models don't "know". They're just more likely to say one thing than another, which is closer to recall of information than to understanding.
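That "more likely to say one thing than another" is literally how decoding works: the model emits scores (logits) over possible next tokens, and a softmax turns them into probabilities. A minimal sketch with made-up logits (the tokens and values here are illustrative, not from any real model):

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits after "The capital of France is ___"
logits = {"Paris": 9.1, "Lyon": 5.3, "London": 4.0}
probs = dict(zip(logits, softmax(list(logits.values()))))
print(probs)  # "Paris" gets the bulk of the probability mass
```

Nothing in this computation checks whether "Paris" is true; the model simply surfaces whichever continuation its training distribution made most likely.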

These "databases" that talk back are an interesting illusion, but the inconsistency is what you seem to be trying to nail down here.

They have all the information encoded inside, but they don't layer that information logically; instead they surface it based on "vibes".


