
I don't know, the translation errors are often pretty weird, like 'hot sauce' being translated to the target equivalent of 'warm sauce'.

Since LLMs work by probabilistically stringing together sequences of words (or tokens, or whatever), I don't expect them to ever become fully fluent, unless natural language degenerates and loses a lot of its flexibility and flourish and analogy and so on. Then we might hit some level of expressiveness that they can actually simulate fully.

The current boom is different from but also very similar to the previous age of symbolic AI. Back then people expected computers to be able to automate warfare and language and whatnot, but the prime use case turned out to be credit checks.
