
This is a fundamental violation of trust. If an LLM is meant to eventually evolve into general intelligence capable of true reasoning, then we are essentially watching a child grow up, and posts like this are screaming "you're raising a psychopath!!"... If AI is just an overly complicated stack of autocorrect functions, this proves its behavior is heavily, if not entirely, swayed by its usually hidden rules, to the point that it's 100% untrustworthy. In either scenario, the amount of personal data available to a software program capable of gaslighting a user should give everyone great pause.


It's a reflection of its creators. The system is operating as designed; the system prompts were written by living people at Google, people who have demonstrated contempt for us and who are motivated by a slew of incentives that are not in our best interests.


LLMs are not kids. Kids sometimes lie; it's part of the learning process. Lying to cover up a mistake is not a strong sign of psychopathy.

> This is a fundamental violation of trust.

I don't disagree. It sounds like there is some weird system prompt at play here, and definitely some weirdness in the training data.



