
It could be self-motivated bias, but I speculate it is likely not even that. It's enforced bias. The person doesn't actually believe this is the correct moderation, yet drifts in the direction of what's "acceptable online" to stay out of trouble. To get activists off their back.

You can be sure that if you launch a tool like this, within the hour the first tweet appears ("look how racist this is!!!!!"), which then goes viral.

The particular user had that goal in mind from the start, and is supremely happy when it pans out. It validates and rewards them; this is a hysterical addiction to division and outrage. The AI being racist (or being perceived as such) didn't befall them through good-faith usage, as an unwelcome surprise they stumbled across. The outcome was actively chased and fabricated: how can I make it look racist?

That's how broken and hostile the environment has become. Under these conditions, both companies and individuals have to over-correct.


