This is a fundamental violation of trust. If an AI LLM is meant to eventually evolve into general intelligence capable of true reasoning, then we are essentially watching a child grow up. Posts like this are screaming "you're raising a psychopath!!"...
If AI is just an overly complicated stack of autocorrect functions, this proves its behavior is heavily, if not entirely, swayed by its usually hidden rules, to the point that it's 100% untrustworthy.
In any scenario, the amount of personal data available to a software program capable of gaslighting a user should give everyone great pause.
It's a reflection of its creators. The system is operating as designed; the system prompts were written by living people at Google, people who have demonstrated contempt for us and who are motivated by a slew of incentives that are not in our best interests.