Hacker News

Yes.

I have a background in East Asian cultural studies. There, much more expression is conveyed through the eyes than through the mouth. For the uninitiated it's subtle, but once you get used to it, it becomes much more obvious.

Anthropologists call these display rules and encoding differences. Cultures don't just express emotion differently; they also read it differently. A Japanese smile can be social camouflage, while an American smile signals approachability. I guess that's why Western animation over-emphasizes the mouth, while Eastern animation tends to over-emphasize the eyes.

Why wouldn't Yakutian, Indio, or Namib populations have similar phenomena that an AI (or a stereotypical white Westerner who hasn't extensively studied those societies/cultures) would not immediately recognise?

AI trained on Western facial databases inherits those perceptual shortcuts. It "learns" to detect happiness from wide mouths and visible teeth, and sadness from drooping lips, so anything outside that grammar registers as neutral or gets misclassified.
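To make that failure mode concrete, here's a toy sketch (all feature values are made up for illustration): a nearest-centroid "classifier" trained only on exemplars where happiness is encoded almost entirely in the mouth. A face that expresses happiness mostly through the eyes ends up closest to the neutral centroid.

```python
# Toy illustration of dataset bias, not a real emotion model.
# Hypothetical feature vector: (mouth_curve, eye_crinkle).
from math import dist

# Training exemplars from a (hypothetical) Western-biased dataset:
# happiness shows up as mouth curvature, eyes barely move.
TRAIN = {
    "happy":   [(0.9, 0.3), (0.8, 0.2), (0.85, 0.4)],
    "neutral": [(0.1, 0.1), (0.2, 0.0), (0.15, 0.2)],
    "sad":     [(-0.7, 0.1), (-0.8, 0.0), (-0.6, 0.2)],
}

def centroid(points):
    # Component-wise mean of a list of feature vectors.
    return tuple(sum(c) / len(points) for c in zip(*points))

CENTROIDS = {label: centroid(pts) for label, pts in TRAIN.items()}

def classify(features):
    # Assign the label whose centroid is nearest in feature space.
    return min(CENTROIDS, key=lambda label: dist(features, CENTROIDS[label]))

# A genuinely happy face that smiles mostly with the eyes:
# strong eye crinkle, little mouth movement.
eye_smile = (0.15, 0.9)
print(classify(eye_smile))  # prints "neutral"
```

The classifier isn't "wrong" given its training data; the training data simply never taught it that eye crinkle alone can encode happiness, which is exactly the perceptual shortcut described above.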

And it gets reinforced by (Western) users: a hypothetical 'perfect' face-emotion-identification AI would probably be perceived as less reliable by the white Western user than one that mirrors the biases.


