> The whitepapers provided by Apple do not say what the human review consists of.

At minimum, we know that each flagged image generates a "safety voucher" consisting of metadata plus a low-resolution greyscale version of the image. The human review process involves viewing the metadata and thumbnails enclosed in the safety vouchers that cumulatively caused the account to be flagged.
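Roughly, that review gate could be sketched like this in Swift. Everything here is an illustrative assumption based on the published description (type names, fields, the confirmation callback), not Apple's actual code:

    import Foundation

    // Illustrative sketch only: names and fields are assumptions, not
    // Apple's implementation.
    struct SafetyVoucher {
        let matchMetadata: [String: String]  // details about the hash match
        let visualDerivative: Data           // low-res greyscale thumbnail
    }

    struct FlaggedAccount {
        let accountID: String
        let vouchers: [SafetyVoucher]        // vouchers that crossed the threshold
    }

    // A reviewer only sees voucher contents for accounts past the match
    // threshold, and a report is made only if every reviewed voucher is
    // confirmed by the human reviewer.
    func shouldReport(_ account: FlaggedAccount,
                      threshold: Int,
                      reviewerConfirms: (SafetyVoucher) -> Bool) -> Bool {
        guard account.vouchers.count >= threshold else { return false }
        return account.vouchers.allSatisfy(reviewerConfirms)
    }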



A human at Apple likely doesn't get access to anything. I assume the checking would be done by a police group operating under strict restrictions.


The data is not sent to a "police group"; it is sent to NCMEC (the National Center for Missing & Exploited Children).

From Apple's FAQ:

Will CSAM detection in iCloud Photos falsely flag innocent people to law enforcement?

No. The system is designed to be very accurate, and the likelihood that the system would incorrectly flag any given account is less than one in one trillion per year. In addition, any time an account is flagged by the system, Apple conducts human review before making a report to NCMEC. As a result, system errors or attacks will not result in innocent people being reported to NCMEC.
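To see why a multi-match threshold ahead of human review drives the per-account false-flag rate so low, here is a back-of-the-envelope binomial tail calculation. The per-image false-match rate, photo count, and threshold below are made-up illustrative numbers, not Apple's published parameters:

    import Foundation

    // P(at least `t` independent false matches among `n` photos),
    // given a per-image false-match probability `p` (binomial tail).
    func falseFlagProbability(p: Double, n: Int, t: Int) -> Double {
        guard t <= n else { return 0 }
        var total = 0.0
        for k in t...n {
            // log of C(n, k) via log-gamma to avoid overflow
            let logC = lgamma(Double(n) + 1) - lgamma(Double(k) + 1)
                     - lgamma(Double(n - k) + 1)
            total += exp(logC + Double(k) * log(p) + Double(n - k) * log(1 - p))
        }
        return total
    }

    // Example with made-up numbers: even a generous 1-in-a-million
    // per-image error rate over 10,000 photos gives a vanishingly small
    // chance of reaching a 30-match threshold by accident.
    print(falseFlagProbability(p: 1e-6, n: 10_000, t: 30))

The point is that requiring several independent matches before any human looks at the vouchers compounds the per-image error rate down to the per-account figure Apple quotes.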


NCMEC then makes those images available to the appropriate law enforcement agency after the fact.


Yes, if they're CSAM.



