
It’s strange watching people put so much faith in these so-called “AI detection tools”. Nobody really knows how they work, yet they’re treated like flawless judges. In practice they’re black boxes that quietly decide who gets flagged for “fraud”, and because the tool said so, everyone pretends it must be true. The result is a neat illusion that all the “cheaters” were caught, when in reality the system is mostly picking people at random and giving the process a false sense of certainty.

Bizarre and unfair.



I hope this can be a "teachable moment" for all involved: have some students complete their assignments in person, then submit their guaranteed-not-AI-written essays to said AI detection tool. Objectively measure how many false positives it reports.
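A minimal sketch of that measurement in Python; check_essay is a hypothetical placeholder for whatever detector is being evaluated, not a real API:

    def check_essay(text: str) -> bool:
        """Placeholder: return True if the detector flags `text` as AI-written.
        Swap in a call to the actual detection tool under test."""
        raise NotImplementedError

    def false_positive_rate(human_essays: list[str]) -> float:
        """Fraction of known-human essays the detector wrongly flags."""
        flagged = sum(1 for essay in human_essays if check_essay(essay))
        return flagged / len(human_essays)

    # Usage: run false_positive_rate() over a few dozen essays written
    # in person to get a rough estimate of how often innocent students
    # would be accused.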


Famously, a popular AI detector "determined" the Declaration of Independence was written by AI.

https://decrypt.co/286121/ai-detectors-fail-reliability-risk...



