It is becoming more and more clear that for "Open"AI the whole "AI safety/alignment" effort has been a PR stunt: a way to attract workers, cover up the actual current problems with AI (e.g. stealing data, use for producing cheap junk, hallucinations, and societal impact), and build rapport within the AI scene and with politicians. Now that they have a real product and a strong position in AI development, they could not care less about these things. Those who naively believed in the "existential risk" PR stunt and were working on it are now being discarded.

