It depends on their test dataset. If the test set was written 80% by AI and 20% by humans, a tool that labels every essay as AI-written would have a reported accuracy of 80%. That's why other metrics such as specificity and sensitivity (among many others) are commonly reported as well.

Just speaking in general here -- I don't know what specific phrasing TurnItIn uses.
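To make the point concrete, here's a minimal sketch (the 80/20 split and the "label everything AI" classifier are the hypothetical numbers from above, not any real tool's results):

```python
# Hypothetical test set: 80 AI-written essays, 20 human-written.
labels = ["ai"] * 80 + ["human"] * 20   # ground truth
preds = ["ai"] * 100                    # degenerate tool: labels everything AI

tp = sum(t == "ai" and p == "ai" for t, p in zip(labels, preds))
tn = sum(t == "human" and p == "human" for t, p in zip(labels, preds))
fp = sum(t == "human" and p == "ai" for t, p in zip(labels, preds))
fn = sum(t == "ai" and p == "human" for t, p in zip(labels, preds))

accuracy = (tp + tn) / len(labels)      # 0.80 -- looks impressive on its own
sensitivity = tp / (tp + fn)            # 1.00 -- catches every AI essay
specificity = tn / (tn + fp)            # 0.00 -- falsely flags every human essay
print(accuracy, sensitivity, specificity)
```

Reporting the 0% specificity alongside the 80% accuracy is what exposes the tool as useless.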


