
“Undesirable output” might be a more accurate term, since there is no difference in the process that produces a useful output versus a “hallucination” other than the utility of the resulting data.
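
To make that concrete, here's a toy sketch (plain NumPy, not a real model; the vocabulary and logit function are invented for illustration): the decoding loop that emits a "correct" continuation is the same loop that emits a "hallucinated" one. Only the sampled tokens differ.

    # Toy stand-in for an LLM's decoding loop. There is no separate
    # "hallucination path": every token, factual or not, comes from the
    # same softmax-and-sample step over next-token logits.
    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["Paris", "Lyon", "Atlantis", "is", "the", "capital", "of", "France", "."]

    def next_token_logits(context):
        # Stand-in for a transformer forward pass; in a real LLM these
        # logits come from learned weights, with no built-in fact check.
        return rng.normal(size=len(vocab))

    def generate(context, steps=5, temperature=1.0):
        out = list(context)
        for _ in range(steps):
            logits = next_token_logits(out) / temperature
            probs = np.exp(logits - logits.max())
            probs /= probs.sum()
            out.append(vocab[rng.choice(len(vocab), p=probs)])
        return " ".join(out)

    print(generate(["The", "capital", "of", "France", "is"]))
    # Whether the continuation happens to be "Paris" (useful) or
    # "Atlantis" (a "hallucination") is decided only by which token the
    # distribution happens to favor; the generation process is identical.

The "hallucination" label lives in the evaluator's judgment of the output, not anywhere in the sampling procedure itself.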

I had a partially formed insight along these lines: that LLMs exist in a latent space of information with very little external grounding. A sort of dreamspace. I wonder if embodying them in robots would anchor them to some kind of ground-truth source?
