
FWIW, a big task my team shipped had a measured accuracy in the mid-80% range.

I think the line of thought in this thread is broadly correct. The most value I’ve seen from AI is on problems where the cost of being wrong is low and the output is easy to verify.

I wonder if anyone is taking good measurements of how often an LLM can do things like route calls in a call center. My personal experience has not been good, and I would be surprised if they hit 90% accuracy.
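Measuring this is straightforward if you have hand-labeled calls: run the router over a labeled set and count matches. A minimal sketch (the intents, keywords, and transcripts below are all made up for illustration; a real test would swap the trivial keyword router for the LLM under evaluation):

```python
# Minimal sketch: measuring routing accuracy against a hand-labeled set.
# All intents, keywords, and transcripts here are hypothetical.

def route(transcript: str) -> str:
    """A trivial keyword router standing in for whatever model is under test."""
    text = transcript.lower()
    if "bill" in text or "charge" in text:
        return "billing"
    if "cancel" in text:
        return "cancellations"
    return "general"

# Each pair is (call transcript, human-labeled correct queue).
labeled_calls = [
    ("I was charged twice this month", "billing"),
    ("I want to cancel my plan", "cancellations"),
    ("How do I reset my password?", "general"),
    ("There's a weird charge on my bill", "billing"),
]

correct = sum(route(t) == gold for t, gold in labeled_calls)
accuracy = correct / len(labeled_calls)
print(f"routing accuracy: {accuracy:.0%}")
```

The interesting number isn't the toy router's score, of course; it's what the same harness reports when the LLM is plugged in, on a labeled set large enough to be representative.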





I think these kinds of problems were already solved with classical ML, and with pretty high accuracy.

But now everyone is trying to make chatbots do that job, and they are awful at it.



