
Yes, this "truthfulness" problem is the real problem with all these "generative search" products.

Forget nudes in generated images. This is the real ethics issue!



You can somewhat detect BS by getting the model to also output the log-probability of its token selection. See https://twitter.com/goodside/status/1581151528949141504 for examples.
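To illustrate the idea (not the actual API call in the linked thread), here's a toy sketch of how per-token log-probabilities can flag guessing: a peaked distribution over next tokens gives the chosen token a log-prob near zero, while a flat one gives it a much more negative score. The vocabularies and logits below are made up for demonstration.

```python
import math

def softmax(logits):
    # Convert raw logits into a probability distribution (numerically stable).
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    z = sum(exps.values())
    return {tok: e / z for tok, e in exps.items()}

def logprob_of_choice(logits, chosen):
    # Log-probability the model assigned to the token it actually emitted.
    return math.log(softmax(logits)[chosen])

# Toy distributions: confident about "Paris", hedging between close dates.
confident = {"Paris": 9.0, "London": 2.0, "Rome": 1.0}
uncertain = {"1887": 1.2, "1889": 1.1, "1890": 1.0}

lp_conf = logprob_of_choice(confident, "Paris")   # near 0
lp_unc = logprob_of_choice(uncertain, "1887")     # strongly negative

# A very negative log-prob is a hint the model is guessing.
print(round(lp_conf, 3), round(lp_unc, 3))
```

As the sibling comment notes, though, this only measures the model's confidence, not the truth: a confidently memorized falsehood still scores high.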


I don't think that's going to work.

The probability for "Trump is the present President of the US" is likely very high. It's still untrue.


GPT-3 training data cuts off in October 2019. Not sure if they updated it since last year.


Updating it doesn't make this kind of problem go away, unless they figure out a way to have real-time updates to the model (which could happen).


You wouldn't even need the model to be trained in real time. I'd love to see OpenAI buy Wolfram Research. WolframAlpha has managed to integrate tons of external data into a natural language interface. ChatGPT already knows when to insert placeholders, such as "$XX.XX" or "[city name]" when it doesn't know a specific bit of information. Combining the two could be very powerful. You could have data that's far more current than what's possible by retraining a large model.
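The placeholder idea could be sketched roughly like this: scan the model's draft for "[slot]" markers and substitute fresh values from an external source. Everything here is hypothetical — the `LIVE_DATA` dict stands in for whatever live lookup (WolframAlpha or otherwise) would back it, and the bracket syntax is just the convention from the comment above.

```python
import re

# Hypothetical stand-in for a live external data source; in practice
# this would be an API call, not a hard-coded dict.
LIVE_DATA = {
    "city name": "Boston",
    "current temperature": "31F",
}

PLACEHOLDER = re.compile(r"\[([^\[\]]+)\]")

def fill_placeholders(model_output, lookup):
    # Replace each "[slot]" the model emitted with external data,
    # leaving unknown slots untouched for a human (or retry) to handle.
    def sub(match):
        key = match.group(1).lower()
        return lookup.get(key, match.group(0))
    return PLACEHOLDER.sub(sub, model_output)

draft = "It is [current temperature] in [city name] right now."
print(fill_placeholders(draft, LIVE_DATA))
# Unknown slots pass through unchanged:
print(fill_placeholders("Population: [population]", LIVE_DATA))
```

The nice property is that the slow-to-retrain model only has to learn *where* it lacks knowledge; the fast-moving facts live outside it.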



